PySpark connecting to HBase reports org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter error
阿新 • Published: 2018-12-31
ERROR python.Converter: Failed to load converter: org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/var/lib/spark/cspark/python/pyspark/context.py", line 678, in newAPIHadoopRDD
    jconf, batchSize)
  File "/var/lib/spark/cspark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/var/lib/spark/cspark/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/var/lib/spark/cspark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.lang.ClassNotFoundException: org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:229)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1$$anonfun$1.apply(PythonHadoopUtil.scala:46)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1$$anonfun$1.apply(PythonHadoopUtil.scala:45)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1.apply(PythonHadoopUtil.scala:45)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1.apply(PythonHadoopUtil.scala:44)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.api.python.Converter$.getInstance(PythonHadoopUtil.scala:44)
    at org.apache.spark.api.python.PythonRDD$.getKeyValueConverters(PythonRDD.scala:743)
    at org.apache.spark.api.python.PythonRDD$.convertRDD(PythonRDD.scala:756)
    at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:580)
    at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
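For context, this error typically appears when reading an HBase table through sc.newAPIHadoopRDD with the example converters, roughly as in the minimal sketch below. The ZooKeeper quorum host and table name here are placeholder assumptions, not values from this post:

from pyspark import SparkContext

sc = SparkContext(appName="pyspark_hbase_read")

# Placeholders for illustration only; adjust to your cluster.
zk_host = "localhost"     # hypothetical ZooKeeper quorum host
table = "test_table"      # hypothetical HBase table name

conf = {
    "hbase.zookeeper.quorum": zk_host,
    "hbase.mapreduce.inputtable": table,
}

# These converter classes live in the spark-examples jar; if that jar is
# missing from the classpath, Spark raises the ClassNotFoundException above.
key_conv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
value_conv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"

rdd = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    keyConverter=key_conv,
    valueConverter=value_conv,
    conf=conf,
)
print(rdd.count())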
Solution:
Spark 2.0 no longer ships the jar that converts HBase data into a form Python can read, so we have to download it separately.
Download the jar spark-examples_2.11-1.6.0-typesafe-001.jar (https://mvnrepository.com/artifact/org.apache.spark/spark-examples_2.11/1.6.0-typesafe-001), then create a directory under your Spark installation to hold it.
Run the following commands:
1: mkdir /var/lib/spark/jars/hbase/
2: upload the jar you just downloaded into that directory with the rz command (the latest version at the time of writing is spark-examples_2.11-1.6.0-typesafe-001.jar)
3: go into Spark's conf directory and add the following line to spark-env.sh:
export SPARK_DIST_CLASSPATH=$(/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hadoop classpath):$(/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hbase classpath):/var/lib/spark/jars/hbase/*
/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hadoop: change this to your own Hadoop installation path
/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/hbase: change this to your own HBase installation path
Finally, restart HBase, and everything should work; a quick way to confirm the fix is sketched below.
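As a sanity check (a hypothetical sketch, not part of the original steps), you can ask the JVM behind an active SparkContext to load the converter class directly; if the jar is on the classpath, this prints the class name instead of raising the earlier ClassNotFoundException:

# Assumes `sc` is a SparkContext started after the classpath change.
# Note: sc._jvm is PySpark's internal py4j handle to the driver JVM.
cls = sc._jvm.java.lang.Class.forName(
    "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter")
print(cls.getName())  # prints the fully qualified class name on success

After that, re-running a read like the sketch at the top of this post should return rows instead of the Py4JJavaError.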