
Configuring JMX monitoring for Spark executors

Monitoring Spark with JMX is a bit more cumbersome than monitoring Storm:

Method 1:

First, add the following to spark-defaults.conf. Note that port 8711 cannot be reused: you cannot run two executors on the same node, or the port will conflict. This is less convenient than Storm.

 spark.executor.extraJavaOptions  -XX:+PrintGCDetails
 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
 -Dcom.sun.management.jmxremote.port=8711
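Once an executor JVM is running with these flags, any standard JMX client can attach to port 8711 and read its MBeans. A minimal sketch in Java (the host argument and the `ExecutorJmxProbe` class name are assumptions; the port matches the spark-defaults.conf entry above; with no argument it queries this JVM's own platform MBean server, just to demonstrate the call shape):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ExecutorJmxProbe {
    // Reads current heap usage from the standard JVM memory MBean
    // on whatever MBean server `mbs` points at.
    static long heapUsed(MBeanServerConnection mbs) throws Exception {
        CompositeData heap = (CompositeData) mbs.getAttribute(
                new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
        return (Long) heap.get("used");
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0) {
            // Remote case: args[0] is the executor host; 8711 is the
            // jmxremote.port configured in spark-defaults.conf
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + args[0] + ":8711/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                System.out.println("executor heap used: "
                        + heapUsed(connector.getMBeanServerConnection()));
            }
        } else {
            // No host given: query this JVM's own MBean server as a smoke test
            System.out.println("local heap used: "
                    + heapUsed(ManagementFactory.getPlatformMBeanServer()));
        }
    }
}
```

The same connection would also expose GC and thread MBeans, which pair naturally with the -XX:+PrintGCDetails flag above.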

Then configure metrics.properties. I only monitor executors here; you can replace "executor" with * to cover all instances:
executor.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
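As a sketch of the wildcard variant mentioned above, the same pair of lines with the * instance prefix would apply the JMX sink and JVM source to every Spark component (master, worker, driver, executor) rather than executors only:

```properties
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```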

Method 2 (this seems to have a problem: it can only monitor the SparkSubmit process, not the CoarseGrainedExecutorBackend process):

Alternatively, configure it directly in the spark-class file, but then you have to edit the port number by hand every time you start a driver. This also assumes the master and worker are already running; otherwise startup fails with a port error.

if [ -n "$SPARK_SUBMIT_BOOTSTRAP_DRIVER" ]; then
  # This is used only if the properties file actually contains these special configs
  # Export the environment variables needed by SparkSubmitDriverBootstrapper
  export RUNNER
  export CLASSPATH
  export JAVA_OPTS
  #export JAVA_OPTS="-XX:MaxPermSize=128m $OUR_JAVA_OPTS -Dcom.sun.management.jmxremote.port=8300 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
  export OUR_JAVA_MEM
  export SPARK_CLASS=1
  shift # Ignore main class (org.apache.spark.deploy.SparkSubmit) and use our own
  exec "$RUNNER" org.apache.spark.deploy.SparkSubmitDriverBootstrapper "$@"
else
  # The modified line: append the JMX remote options to JAVA_OPTS
  export JAVA_OPTS="-XX:MaxPermSize=128m $OUR_JAVA_OPTS -Dcom.sun.management.jmxremote.port=8300 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
  # Note: The format of this command is closely echoed in SparkSubmitDriverBootstrapper.scala
  if [ -n "$SPARK_PRINT_LAUNCH_COMMAND" ]; then
    echo -n "Spark Command: " 1>&2
    echo "$RUNNER" -cp "$CLASSPATH" $JAVA_OPTS "$@" 1>&2
    echo -e "========================================\n" 1>&2
  fi
  exec "$RUNNER" -cp "$CLASSPATH" $JAVA_OPTS "$@"
fi