Hadoop CapacityScheduler Configuration
阿新 • Published: 2017-10-25
Below is a CapacityScheduler configuration (capacity-scheduler.xml) for Hadoop that adds new queues and sub-queues; the resulting queue hierarchy is sketched first, followed by the full configuration.
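Read directly from the properties below, the hierarchy is: root is split 50/30/20 among default, spark and hadoop, and spark is further split evenly into spark1 and spark2 (a child's capacity is a percentage of its parent, and siblings must sum to 100):

root (100% of cluster)
  ├─ default   capacity 50, maximum-capacity 70
  ├─ spark     capacity 30, maximum-capacity 70
  │    ├─ spark1   capacity 50 (of spark), maximum-capacity 70
  │    └─ spark2   capacity 50 (of spark), maximum-capacity 70
  └─ hadoop    capacity 20, maximum-capacity 70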
<configuration>

  <property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.2</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.maximum-applications</name>
    <value>10000</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.node-locality-delay</name>
    <value>40</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.queue-mappings-override.enable</name>
    <value>false</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.accessible-node-labels</name>
    <value>*</value>
    <description></description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
    <value>*</value>
    <description></description>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.capacity</name>
    <value>100</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>50</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>70</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.state</name>
    <value>RUNNING</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
    <value>1</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,spark,hadoop</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.acl_administer_queue</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.acl_submit_applications</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.capacity</name>
    <value>30</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.maximum-capacity</name>
    <value>70</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.minimum-user-limit-percent</name>
    <value>100</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.ordering-policy</name>
    <value>fifo</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.state</name>
    <value>RUNNING</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.user-limit-factor</name>
    <value>1</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.acl_administer_queue</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.acl_submit_applications</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.capacity</name>
    <value>20</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.maximum-capacity</name>
    <value>70</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.minimum-user-limit-percent</name>
    <value>100</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.ordering-policy</name>
    <value>fifo</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.state</name>
    <value>RUNNING</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.hadoop.user-limit-factor</name>
    <value>1</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.queues</name>
    <value>spark1,spark2</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.acl_administer_queue</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.acl_submit_applications</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.capacity</name>
    <value>50</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.maximum-capacity</name>
    <value>70</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.minimum-user-limit-percent</name>
    <value>100</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.ordering-policy</name>
    <value>fifo</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.state</name>
    <value>RUNNING</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark1.user-limit-factor</name>
    <value>1</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.acl_administer_queue</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.acl_submit_applications</name>
    <value>*</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.capacity</name>
    <value>50</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.maximum-capacity</name>
    <value>70</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.minimum-user-limit-percent</name>
    <value>100</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.ordering-policy</name>
    <value>fifo</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.state</name>
    <value>RUNNING</value>
  </property>

  <property>
    <name>yarn.scheduler.capacity.root.spark.spark2.user-limit-factor</name>
    <value>1</value>
  </property>

</configuration>
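After editing capacity-scheduler.xml, the queues can usually be picked up without restarting the ResourceManager by running yarn rmadmin -refreshQueues, and applications are then submitted to a leaf queue by name, for example with -Dmapreduce.job.queuename=spark1 for MapReduce or spark-submit --queue spark1 for Spark.

Note that yarn.scheduler.capacity.queue-mappings-override.enable is left at false above, so user/group mappings (if you add any) only apply when an application does not request a queue itself. A minimal sketch of such a mapping, assuming hypothetical users etl_user and ad_hoc_user, might look like:

<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <!-- Format: u:USER:QUEUE or g:GROUP:QUEUE, evaluated left to right, first match wins.
       The user names below are illustrative only; %user matches any user. -->
  <value>u:etl_user:hadoop,u:ad_hoc_user:spark1,u:%user:default</value>
</property>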