CentOS Virtual Machine Hadoop Host Command Notes (Weekly Summary, 10.5 to 10.11)
阿新 • Published: 2020-10-13
一、Script Commands
1. xsync: synchronizes files across the three hosts. After a file is modified on one host, running this command propagates the change to the other hosts.
Usage: xsync [file or directory path] [more paths ...] (each path is synced to the same location on every host)
#!/bin/bash

#1. Check the number of arguments
if [ $# -lt 1 ]
then
    echo Not Enough Argument!
    exit;
fi

#2. Iterate over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
    echo ================= $host ================
    #3. Iterate over all given paths and send them one by one
    for file in $@
    do
        #4. Check whether the file exists
        if [ -e $file ]
        then
            #5. Get the parent directory
            pdir=$(cd -P $(dirname $file); pwd)
            #6. Get the file name
            fname=$(basename $file)
            ssh $host "mkdir -p $pdir"
            rsync -av $pdir/$fname $host:$pdir
        else
            echo $file does not exist!
        fi
    done
done
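For example, after editing a Hadoop configuration file, the whole config directory can be pushed to all three hosts in one command. The paths below are only a sketch based on the /opt/module/hadoop-3.1.3 layout used in the scripts in these notes; substitute whatever file or directory you actually changed:

# distribute an edited Hadoop config directory to hadoop102/103/104
xsync /opt/module/hadoop-3.1.3/etc/hadoop
# distribute a single file, e.g. a personal script
xsync /home/atguigu/bin/myjps.sh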
2. mycluster.sh: starts or stops all Hadoop nodes at once
Usage: mycluster.sh start/stop
#!/bin/bash

if [ $# -lt 1 ]
then
    echo "Input Args Error!!!!"
    exit
fi

case $1 in
start)
    echo "========== START HDFS =========="
    ssh hadoop102 /opt/module/hadoop-3.1.3/sbin/start-dfs.sh
    echo "========== START YARN =========="
    ssh hadoop103 /opt/module/hadoop-3.1.3/sbin/start-yarn.sh
    ;;
stop)
    echo "========== STOP YARN =========="
    ssh hadoop103 /opt/module/hadoop-3.1.3/sbin/stop-yarn.sh
    echo "========== STOP HDFS =========="
    ssh hadoop102 /opt/module/hadoop-3.1.3/sbin/stop-dfs.sh
    ;;
*)
    echo "Input Args Error!!!!"
    ;;
esac
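Note that both xsync and mycluster.sh run commands on the other hosts over ssh, so they assume passwordless (key-based) SSH login from the host running the script to hadoop102/103/104. If that is not set up yet, a minimal sketch (run as the same user, assuming the default key path ~/.ssh/id_rsa) looks like this:

# generate a key pair (press Enter through the prompts)
ssh-keygen -t rsa
# copy the public key to each node so ssh no longer asks for a password
ssh-copy-id hadoop102
ssh-copy-id hadoop103
ssh-copy-id hadoop104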
3. myjps.sh: shows the Java processes (jps) running on each of the three nodes
Usage: myjps.sh
#!/bin/bash

for i in hadoop102 hadoop103 hadoop104
do
    echo "========== $i jps =========="
    ssh $i /opt/module/jdk1.8.0_212/bin/jps
done
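A sketch of installing these three scripts, assuming they are saved in the user's personal bin directory (e.g. /home/atguigu/bin, which on CentOS is normally added to PATH by the default .bash_profile):

# make the scripts executable
chmod +x ~/bin/xsync ~/bin/mycluster.sh ~/bin/myjps.sh
# distribute them so they can be run from any node
xsync ~/bin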
二、HDFS Operation Notes
1. hadoop fs -help : lists help information for every command
hadoop fs -help [specific command] : shows the help information for that command
2. hadoop fs -ls [HDFS directory] : lists the files in the directory together with their permissions
hadoop fs -ls -R [HDFS directory] : recursively lists all files under the directory, including subdirectories
3. hadoop fs -cat [HDFS file path] : prints the contents of a file
4. hadoop fs -mkdir [HDFS path] : creates a directory
hadoop fs -mkdir -p [HDFS path] : creates nested (multi-level) directories
5. hadoop fs -rm [file path] : deletes a file
hadoop fs -rm -r [directory path] : deletes a directory recursively
6. hadoop fs -put [local file path] [HDFS path] : uploads (copies) a local file into HDFS
7. hadoop fs -cp [source URI] [destination URI] : copies a file from the source path to the destination path within HDFS (see the example session after this list)
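A short example session tying these commands together. The directory and file names below are illustrative only, not part of the original notes:

hadoop fs -mkdir -p /user/atguigu/input        # create a nested directory in HDFS
hadoop fs -put ./word.txt /user/atguigu/input  # upload a local file
hadoop fs -ls -R /user/atguigu                 # recursively list everything under the directory
hadoop fs -cat /user/atguigu/input/word.txt    # view the uploaded file
hadoop fs -cp /user/atguigu/input/word.txt /user/atguigu/word_copy.txt  # copy within HDFS
hadoop fs -rm -r /user/atguigu/input           # delete the directory recursively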
三、Maven Dependencies Required for an IDEA Project to Connect to Hadoop
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-slf4j-impl</artifactId>
        <version>2.12.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-common</artifactId>
        <version>3.1.3</version>
    </dependency>
</dependencies>