
but there is no HDFS_NAMENODE_USER defined


While following a book to install Hadoop, I hit the following error:
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Fix:
Add the variable named in the error, HDFS_NAMENODE_USER=root, to /usr/local/hadoop-3.0.2/sbin/start-dfs.sh.
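Hadoop 3.x refuses to start its daemons as root unless the operating user is declared. A common form of the fix declares the companion variables as well, not just the one named in the error (a sketch of the lines to add; the DataNode and SecondaryNameNode variables are the usual companions, added here as an assumption beyond what the error message demands):

```shell
# Added near the top of /usr/local/hadoop-3.0.2/sbin/start-dfs.sh
# (and stop-dfs.sh, which performs the same check)
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
```

These can also be set as exports in etc/hadoop/hadoop-env.sh, which keeps the sbin scripts unmodified.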

Hadoop downloads: http://archive.apache.org/dist/hadoop/core/
Error:
[root@web78 hadoop-1.2.1]# bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 50

Number of Maps = 10
Samples per Map = 50
java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:567)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:318)
at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:265)
at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
Reference:
https://blog.csdn.net/weiyongle1996/article/details/74094989/
Fix:

(This message generally indicates a client/cluster version mismatch: IPC version 9 is the Hadoop 2.x+ server protocol, while version 4 is the Hadoop 1.x client.)
Checking the directory configured as hadoop.tmp.dir in core-site.xml showed it was empty, so:
Stop the cluster: bin/stop-all.sh
Reformat the NameNode (note: this erases all HDFS data): ./hadoop namenode -format
Restart the cluster: ./start-all.sh

After the steps above, rerunning the job succeeds:
[root@web78 hadoop-1.2.1]# bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 50
Number of Maps = 10
Samples per Map = 50
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/06/08 16:15:25 INFO mapred.FileInputFormat: Total input paths to process : 10
18/06/08 16:15:26 INFO mapred.JobClient: Running job: job_201806081614_0001
18/06/08 16:15:27 INFO mapred.JobClient: map 0% reduce 0%
18/06/08 16:15:41 INFO mapred.JobClient: map 20% reduce 0%
18/06/08 16:15:50 INFO mapred.JobClient: map 40% reduce 0%
18/06/08 16:16:11 INFO mapred.JobClient: map 50% reduce 0%
18/06/08 16:16:14 INFO mapred.JobClient: map 70% reduce 0%
18/06/08 16:16:15 INFO mapred.JobClient: map 80% reduce 0%
18/06/08 16:16:32 INFO mapred.JobClient: map 90% reduce 0%
18/06/08 16:16:34 INFO mapred.JobClient: map 90% reduce 26%
18/06/08 16:16:37 INFO mapred.JobClient: map 100% reduce 26%
18/06/08 16:16:40 INFO mapred.JobClient: map 100% reduce 30%
18/06/08 16:16:45 INFO mapred.JobClient: map 100% reduce 100%
18/06/08 16:16:46 INFO mapred.JobClient: Job complete: job_201806081614_0001
18/06/08 16:16:46 INFO mapred.JobClient: Counters: 30
18/06/08 16:16:46 INFO mapred.JobClient: Job Counters
18/06/08 16:16:46 INFO mapred.JobClient: Launched reduce tasks=1
18/06/08 16:16:46 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=214169
18/06/08 16:16:46 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
18/06/08 16:16:46 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
18/06/08 16:16:46 INFO mapred.JobClient: Launched map tasks=10
18/06/08 16:16:46 INFO mapred.JobClient: Data-local map tasks=10
18/06/08 16:16:46 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=63724
18/06/08 16:16:46 INFO mapred.JobClient: File Input Format Counters
18/06/08 16:16:46 INFO mapred.JobClient: Bytes Read=1180
18/06/08 16:16:46 INFO mapred.JobClient: File Output Format Counters
18/06/08 16:16:46 INFO mapred.JobClient: Bytes Written=97
18/06/08 16:16:46 INFO mapred.JobClient: FileSystemCounters
18/06/08 16:16:46 INFO mapred.JobClient: FILE_BYTES_READ=226
18/06/08 16:16:46 INFO mapred.JobClient: HDFS_BYTES_READ=2430
18/06/08 16:16:46 INFO mapred.JobClient: FILE_BYTES_WRITTEN=616854
18/06/08 16:16:46 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
18/06/08 16:16:46 INFO mapred.JobClient: Map-Reduce Framework
18/06/08 16:16:46 INFO mapred.JobClient: Map output materialized bytes=280
18/06/08 16:16:46 INFO mapred.JobClient: Map input records=10
18/06/08 16:16:46 INFO mapred.JobClient: Reduce shuffle bytes=280
18/06/08 16:16:46 INFO mapred.JobClient: Spilled Records=40
18/06/08 16:16:46 INFO mapred.JobClient: Map output bytes=180
18/06/08 16:16:46 INFO mapred.JobClient: Total committed heap usage (bytes)=2035286016
18/06/08 16:16:46 INFO mapred.JobClient: CPU time spent (ms)=8210
18/06/08 16:16:46 INFO mapred.JobClient: Map input bytes=240
18/06/08 16:16:46 INFO mapred.JobClient: SPLIT_RAW_BYTES=1250
18/06/08 16:16:46 INFO mapred.JobClient: Combine input records=0
18/06/08 16:16:46 INFO mapred.JobClient: Reduce input records=20
18/06/08 16:16:46 INFO mapred.JobClient: Reduce input groups=20
18/06/08 16:16:46 INFO mapred.JobClient: Combine output records=0
18/06/08 16:16:46 INFO mapred.JobClient: Physical memory (bytes) snapshot=1749561344
18/06/08 16:16:46 INFO mapred.JobClient: Reduce output records=0
18/06/08 16:16:46 INFO mapred.JobClient: Virtual memory (bytes) snapshot=7826571264
18/06/08 16:16:46 INFO mapred.JobClient: Map output records=20
Job Finished in 81.206 seconds
Estimated value of Pi is 3.16000000000000000000

Create a directory: bin/hadoop dfs -mkdir /hadoop/word
Upload a file: bin/hadoop fs -put /root/hadoop/input.txt /hadoop/word/
List the uploaded file: bin/hadoop dfs -ls /hadoop/word
cd into the directory holding the test code (the input file input.txt, the map script mapper.py, and the reduce script reducer.py), then run the streaming job:
/usr/local/hadoop-1.2.1/bin/hadoop jar /usr/local/hadoop-1.2.1/contrib/streaming/hadoop-streaming-1.2.1.jar -file ./mapper.py -mapper ./mapper.py -file ./reducer.py -reducer ./reducer.py -input /hadoop/word -output /hadoop/output
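The post does not show mapper.py and reducer.py. A minimal word-count pair in the usual Hadoop Streaming style reads lines from stdin and writes tab-separated key/value lines to stdout; the word-count logic below is an assumption for illustration, not the author's actual scripts (both stages are shown in one file here, whereas a real job ships them as two separate executables):

```python
import sys
from itertools import groupby

def mapper(lines):
    """Streaming map stage: emit 'word<TAB>1' for every word."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    """Streaming reduce stage: sum counts per word. Streaming sorts the
    mapper output by key, so lines for the same word are consecutive."""
    pairs = (line.rstrip("\n").split("\t", 1) for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__":
    # Pick the stage from the command line so one file can demo both.
    stage = mapper if (len(sys.argv) < 2 or sys.argv[1] == "map") else reducer
    for out in stage(sys.stdin):
        print(out)
```

Streaming only requires that each stage be an executable filter over stdin/stdout, which is why plain Python scripts work without any Hadoop library.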

Download mrjob-0.4.2: https://pypi.org/project/mrjob/0.4.2/#files
