Hadoop/YARN/MapReduce Memory Allocation (Configuration) Scheme
Using the recommended configuration published by Hortonworks as a template, the table below shows a common memory allocation scheme for the components of a Hadoop cluster. The rightmost column is the allocation for an 8 GB VM: 1-2 GB of memory is reserved for the operating system, 4 GB is allocated to YARN/MapReduce (which also covers Hive), and the remaining 2-3 GB is set aside for HBase when HBase is needed.

| Configuration File | Configuration Setting | Value Calculation | 8G VM (4G for MR) |
| --- | --- | --- | --- |
| yarn-site.xml | yarn.nodemanager.resource.memory-mb | = containers * RAM-per-container | 4096 |
| yarn-site.xml | yarn.scheduler.minimum-allocation-mb | = RAM-per-container | 1024 |
| yarn-site.xml | yarn.scheduler.maximum-allocation-mb | = containers * RAM-per-container | 4096 |
| mapred-site.xml | mapreduce.map.memory.mb | = RAM-per-container | 1024 |
| mapred-site.xml | mapreduce.reduce.memory.mb | = 2 * RAM-per-container | 2048 |
| mapred-site.xml | mapreduce.map.java.opts | = 0.8 * RAM-per-container | 819 |
| mapred-site.xml | mapreduce.reduce.java.opts | = 0.8 * 2 * RAM-per-container | 1638 |
| yarn-site.xml (check) | yarn.app.mapreduce.am.resource.mb | = 2 * RAM-per-container | 2048 |
| yarn-site.xml (check) | yarn.app.mapreduce.am.command-opts | = 0.8 * 2 * RAM-per-container | 1638 |
| tez-site.xml | tez.am.resource.memory.mb | = RAM-per-container | 1024 |
| tez-site.xml | tez.am.java.opts | = 0.8 * RAM-per-container | 819 |
| hive-site.xml | hive.tez.container.size | = RAM-per-container | 1024 |
| hive-site.xml | hive.tez.java.opts | = 0.8 * RAM-per-container | 819 |
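
For reference, the yarn-site.xml and mapred-site.xml entries for the 8G-VM column could look like the sketch below. This is only an illustration of the values in the table (assuming 4 containers of 1024 MB each, as the calculations imply); the `-Xmx` heap flags are the usual way the 819 MB / 1638 MB java.opts figures are written out.

```xml
<!-- yarn-site.xml: total memory YARN may hand out on this NodeManager,
     plus the smallest and largest single-container allocations -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>

<!-- mapred-site.xml: container sizes for map/reduce tasks and the
     JVM heap inside each container (about 0.8 * container size) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx819m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>

<!-- MapReduce ApplicationMaster container and its heap -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx1638m</value>
</property>
```

The pattern to keep in mind is that each `*.java.opts` heap is roughly 0.8 of the corresponding `*.memory.mb` container size, so the JVM's non-heap overhead still fits inside the memory YARN accounts to the container.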