hadoop job and YARN commands
Command-line tools:
1. List job information:
hadoop job -list
2. Kill a job:
hadoop job -kill job_id
3. View aggregated history logs under the given path:
hadoop job -history output-dir
4. More details about a job:
hadoop job -history all output-dir
5. Print map and reduce completion percentages and all counters:
hadoop job -status job_id
6. Kill a task. Killed tasks do not count against failed attempts:
hadoop job -kill-task <task-id>
7. Fail a task. Failed tasks do count against failed attempts:
hadoop job -fail-task <task-id>
YARN command line:
YARN commands are invoked through the bin/yarn script; running the yarn script with no arguments prints a description of all yarn commands.
Usage: yarn [--config confdir] COMMAND [--loglevel loglevel] [GENERIC_OPTIONS] [COMMAND_OPTIONS]
YARN has an option-parsing framework that handles the generic options and then runs the given class.
Table A:
User commands:
Commands useful for users of a Hadoop cluster:
application
Usage: yarn application [options]
Example 1:
[[email protected] bin]$ ./yarn application -list -appStates ACCEPTED
15/08/10 11:48:43 INFO client.RMProxy: Connecting to ResourceManager at hadoop1/10.0.1.41:8032
Total number of applications (application-types: [] and states: [ACCEPTED]):1
Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
application_1438998625140_1703 MAC_STATUS MAPREDUCE hduser default ACCEPTED UNDEFINED 0% N/A
Example 2:
[[email protected] bin]$ ./yarn application -list
15/08/10 11:43:01 INFO client.RMProxy: Connecting to ResourceManager at hadoop1/10.0.1.41:8032
Total number of applications (application-types: [] and states: [SUBMITTED, ACCEPTED, RUNNING]):1
Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL
application_1438998625140_1701 MAC_STATUS MAPREDUCE hduser default ACCEPTED UNDEFINED 0% N/A
Example 3:
[[email protected] bin]$ ./yarn application -kill application_1438998625140_1705
15/08/10 11:57:41 INFO client.RMProxy: Connecting to ResourceManager at hadoop1/10.0.1.41:8032
Killing application application_1438998625140_1705
15/08/10 11:57:42 INFO impl.YarnClientImpl: Killed application application_1438998625140_1705
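The -list and -kill options compose naturally in scripts. A minimal sketch: the parse function below assumes the column layout shown in the listings above (State is the 6th field); the kill pipeline needs a live cluster, so it is shown commented out.

```shell
#!/bin/sh
# Pull the Application-Id column out of `yarn application -list` output,
# keeping only rows whose State column matches the given state.
apps_in_state() {
  awk -v st="$1" '$1 ~ /^application_/ && $6 == st { print $1 }'
}

# On a live cluster you would pipe the real listing through it:
#   yarn application -list -appStates ACCEPTED | apps_in_state ACCEPTED \
#     | xargs -n1 yarn application -kill
```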
applicationattempt
Usage: yarn applicationattempt [options]
Example 1:
[[email protected] bin]$ yarn applicationattempt -list application_1437364567082_0106
15/08/10 20:58:28 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Total number of application attempts :1
ApplicationAttempt-Id State AM-Container-Id Tracking-URL
appattempt_1437364567082_0106_000001 RUNNING container_1437364567082_0106_01_000001 http://hadoopcluster79:8088/proxy/application_1437364567082_0106/
classpath
Usage: yarn classpath
Prints the class path needed to get the Hadoop jars and the required libraries.
[[email protected] bin]$ yarn classpath
/home/hadoop/apache/hadoop-2.4.1/etc/hadoop:/home/hadoop/apache/hadoop-2.4.1/etc/hadoop:/home/hadoop/apache/hadoop-2.4.1/etc/hadoop:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/common/lib/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/common/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/hdfs:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/hdfs/lib/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/hdfs/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/yarn/lib/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/yarn/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/mapreduce/lib/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/mapreduce/*:/home/hadoop/apache/hadoop-2.4.1/contrib/capacity-scheduler/*.jar:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/yarn/*:/home/hadoop/apache/hadoop-2.4.1/share/hadoop/yarn/lib/*
container
Usage: yarn container [options]
Example 1:
[[email protected] bin]$ yarn container -list appattempt_1437364567082_0106_01
15/08/10 20:45:45 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Total number of containers :25
Container-Id Start Time Finish Time State Host LOG-URL
container_1437364567082_0106_01_000028 1439210458659 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000028/hadoop
container_1437364567082_0106_01_000016 1439210314436 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000016/hadoop
container_1437364567082_0106_01_000019 1439210338598 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000019/hadoop
container_1437364567082_0106_01_000004 1439210314130 0 RUNNING hadoopcluster82:48622 //hadoopcluster82:8042/node/containerlogs/container_1437364567082_0106_01_000004/hadoop
container_1437364567082_0106_01_000008 1439210314130 0 RUNNING hadoopcluster82:48622 //hadoopcluster82:8042/node/containerlogs/container_1437364567082_0106_01_000008/hadoop
container_1437364567082_0106_01_000031 1439210718604 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000031/hadoop
container_1437364567082_0106_01_000020 1439210339601 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000020/hadoop
container_1437364567082_0106_01_000005 1439210314130 0 RUNNING hadoopcluster82:48622 //hadoopcluster82:8042/node/containerlogs/container_1437364567082_0106_01_000005/hadoop
container_1437364567082_0106_01_000013 1439210314435 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000013/hadoop
container_1437364567082_0106_01_000022 1439210368679 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000022/hadoop
container_1437364567082_0106_01_000021 1439210353626 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000021/hadoop
container_1437364567082_0106_01_000014 1439210314435 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000014/hadoop
container_1437364567082_0106_01_000029 1439210473726 0 RUNNING hadoopcluster80:42366 //hadoopcluster80:8042/node/containerlogs/container_1437364567082_0106_01_000029/hadoop
container_1437364567082_0106_01_000006 1439210314130 0 RUNNING hadoopcluster82:48622 //hadoopcluster82:8042/node/containerlogs/container_1437364567082_0106_01_000006/hadoop
container_1437364567082_0106_01_000003 1439210314129 0 RUNNING hadoopcluster82:48622 //hadoopcluster82:8042/node/containerlogs/container_1437364567082_0106_01_000003/hadoop
container_1437364567082_0106_01_000015 1439210314436 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000015/hadoop
container_1437364567082_0106_01_000009 1439210314130 0 RUNNING hadoopcluster82:48622 //hadoopcluster82:8042/node/containerlogs/container_1437364567082_0106_01_000009/hadoop
container_1437364567082_0106_01_000030 1439210708467 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000030/hadoop
container_1437364567082_0106_01_000012 1439210314435 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000012/hadoop
container_1437364567082_0106_01_000027 1439210444354 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000027/hadoop
container_1437364567082_0106_01_000026 1439210428514 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000026/hadoop
container_1437364567082_0106_01_000017 1439210314436 0 RUNNING hadoopcluster84:43818 //hadoopcluster84:8042/node/containerlogs/container_1437364567082_0106_01_000017/hadoop
container_1437364567082_0106_01_000001 1439210306902 0 RUNNING hadoopcluster80:42366 //hadoopcluster80:8042/node/containerlogs/container_1437364567082_0106_01_000001/hadoop
container_1437364567082_0106_01_000002 1439210314129 0 RUNNING hadoopcluster82:48622 //hadoopcluster82:8042/node/containerlogs/container_1437364567082_0106_01_000002/hadoop
container_1437364567082_0106_01_000025 1439210414171 0 RUNNING hadoopcluster83:37140 //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0106_01_000025/hadoop
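A long container listing like the one above is easy to summarize with standard text tools, for example counting RUNNING containers per host. The field positions (State 4th, Host 5th) are taken from the column layout shown above.

```shell
# Count RUNNING containers per host from `yarn container -list` output.
containers_per_host() {
  awk '$1 ~ /^container_/ && $4 == "RUNNING" { n[$5]++ }
       END { for (h in n) print h, n[h] }'
}
```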
Example 2:
[[email protected] bin]$ yarn container -status container_1437364567082_0105_01_000020
15/08/10 20:28:00 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Container Report :
Container-Id : container_1437364567082_0105_01_000020
Start-Time : 1439208779842
Finish-Time : 0
State : RUNNING
LOG-URL : //hadoopcluster83:8042/node/containerlogs/container_1437364567082_0105_01_000020/hadoop
Host : hadoopcluster83:37140
Diagnostics : null
jar
Usage: yarn jar <jar> [mainClass] args...
Runs a jar file. Users can package their YARN code into a jar file and run it with this command.
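For instance, the MapReduce examples jar shipped with the distribution can be run this way. The jar path below assumes the 2.4.1 tarball layout seen in the classpath output earlier and is illustrative only:

```shell
# Estimate pi with the bundled examples jar: 2 map tasks, 10 samples each.
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi 2 10
```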
logs
Usage: yarn logs -applicationId <application ID> [options]
Note: logs cannot be printed until the application has finished.
Example:
[[email protected] bin]$ yarn logs -applicationId application_1437364567082_0104 -appOwner hadoop
15/08/10 17:59:19 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Container: container_1437364567082_0104_01_000003 on hadoopcluster82_48622
============================================================================
LogType: stderr
LogLength: 0
Log Contents:
LogType: stdout
LogLength: 0
Log Contents:
LogType: syslog
LogLength: 3673
Log Contents:
2015-08-10 17:24:01,565 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-08-10 17:24:01,580 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
...... (many thousands of characters omitted here)
// The command below looks up the logs by the application's owner. Since I started application_1437364567082_0104 as the hadoop user, querying with -appOwner root prints the following:
[[email protected] bin]$ yarn logs -applicationId application_1437364567082_0104 -appOwner root
15/08/10 17:59:25 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Logs not available at /tmp/logs/root/logs/application_1437364567082_0104
Log aggregation has not completed or is not enabled.
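Aggregated logs can run to megabytes, so it is common to redirect them to a local file and search there. A sketch (the application ID and grep pattern are placeholders):

```shell
# Save the aggregated logs once, then search locally instead of re-fetching.
yarn logs -applicationId application_1437364567082_0104 > app.log
grep -n 'ERROR\|Exception' app.log | head
```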
node
Usage: yarn node [options]
Example 1:
[[email protected] bin]$ ./yarn node -list -all
15/08/10 17:34:17 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Total Nodes:4
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
hadoopcluster82:48622 RUNNING hadoopcluster82:8042 0
hadoopcluster84:43818 RUNNING hadoopcluster84:8042 0
hadoopcluster83:37140 RUNNING hadoopcluster83:8042 0
hadoopcluster80:42366 RUNNING hadoopcluster80:8042 0
Example 2:
[[email protected] bin]$ ./yarn node -list -states RUNNING
15/08/10 17:39:55 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Total Nodes:4
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
hadoopcluster82:48622 RUNNING hadoopcluster82:8042 0
hadoopcluster84:43818 RUNNING hadoopcluster84:8042 0
hadoopcluster83:37140 RUNNING hadoopcluster83:8042 0
hadoopcluster80:42366 RUNNING hadoopcluster80:8042 0
Example 3:
[[email protected] bin]$ ./yarn node -status hadoopcluster82:48622
15/08/10 17:52:52 INFO client.RMProxy: Connecting to ResourceManager at hadoopcluster79/10.0.1.79:8032
Node Report :
Node-Id : hadoopcluster82:48622
Rack : /default-rack
Node-State : RUNNING
Node-Http-Address : hadoopcluster82:8042
Last-Health-Update : Mon 10/Aug/15 05:52:09:601CST
Health-Report :
Containers : 0
Memory-Used : 0MB
Memory-Capacity : 10240MB
CPU-Used : 0 vcores
CPU-Capacity : 8 vcores
Prints a node report.
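The node report is plain `Key : Value` text, so spare capacity can be computed from it directly. A sketch, assuming the report format shown above:

```shell
# Compute free memory (capacity minus used, in MB) from a
# `yarn node -status` report.
free_mem_mb() {
  awk -F' : ' '
    /Memory-Used/     { gsub(/MB/, "", $2); used = $2 }
    /Memory-Capacity/ { gsub(/MB/, "", $2); cap  = $2 }
    END { print cap - used }'
}
```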
queue
Usage: yarn queue [options]
version
Usage: yarn version
Prints the Hadoop version.
Administrator commands:
The following commands are useful for administrators of a Hadoop cluster.
daemonlog
Usage:
yarn daemonlog -getlevel <host:httpport> <classname>
yarn daemonlog -setlevel <host:httpport> <classname> <level>
Example 1:
[[email protected] ~]# hadoop daemonlog -getlevel hadoopcluster82:50075 org.apache.hadoop.hdfs.server.datanode.DataNode
Connecting to http://hadoopcluster82:50075/logLevel?log=org.apache.hadoop.hdfs.server.datanode.DataNode
Submitted Log Name: org.apache.hadoop.hdfs.server.datanode.DataNode
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: INFO
[[email protected] ~]# yarn daemonlog -getlevel hadoopcluster79:8088 org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl
Connecting to http://hadoopcluster79:8088/logLevel?log=org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl
Submitted Log Name: org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: INFO
[[email protected] ~]# yarn daemonlog -getlevel hadoopcluster78:19888 org.apache.hadoop.mapreduce.v2.hs.JobHistory
Connecting to http://hadoopcluster78:19888/logLevel?log=org.apache.hadoop.mapreduce.v2.hs.JobHistory
Submitted Log Name: org.apache.hadoop.mapreduce.v2.hs.JobHistory
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: INFO
nodemanager
Usage: yarn nodemanager
Starts the NodeManager.
proxyserver
Usage: yarn proxyserver
Starts the web proxy server.
resourcemanager
Usage: yarn resourcemanager [-format-state-store]
Starts the ResourceManager.
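In practice the ResourceManager and NodeManager daemons are usually started through the sbin scripts rather than by running yarn in the foreground. A sketch, assuming a standard tarball install with $HADOOP_HOME set:

```shell
# Foreground (useful for debugging): yarn resourcemanager / yarn nodemanager
# Background daemons, with logs written under $HADOOP_HOME/logs:
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
# Or start all YARN daemons across the cluster from the RM host:
$HADOOP_HOME/sbin/start-yarn.sh
```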
rmadmin
Usage:
yarn rmadmin [-refreshQueues] [-refreshNodes] [-refreshUserToGroupsMapping]
    [-refreshSuperUserGroupsConfiguration] [-refreshAdminAcls] [-refreshServiceAcl]
    [-getGroups [username]]
    [-transitionToActive [--forceactive] [--forcemanual] <serviceId>]
    [-transitionToStandby [--forcemanual] <serviceId>]
    [-failover [--forcefence] [--forceactive] <serviceId1> <serviceId2>]
    [-getServiceState <serviceId>] [-checkHealth <serviceId>] [-help [cmd]]
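A common use of rmadmin is decommissioning a node: add its hostname to the exclude file referenced by yarn.resourcemanager.nodes.exclude-path in yarn-site.xml, then refresh. A sketch (the exclude-file path is an assumption for illustration):

```shell
# Assumes yarn-site.xml points yarn.resourcemanager.nodes.exclude-path
# at this file on the ResourceManager host.
echo "hadoopcluster84" >> /home/hadoop/apache/hadoop-2.4.1/etc/hadoop/yarn.exclude
yarn rmadmin -refreshNodes    # RM re-reads the include/exclude lists
yarn node -list -all          # the node should now show as DECOMMISSIONED
```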
scmadmin