
spark-shell java.lang.NoClassDefFoundError: parquet/hadoop/ParquetOutputCommitter

Spark version:


Error reported when starting spark-shell:



Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath
        
18/03/01 11:36:50 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/home/hadoop/spark/lib/jackson-annotations-2.2.3.jar:/home/hadoop/spark/lib/jackson-core-2.2.3.jar:/home/hadoop/spark/lib/jackson-databind-2.2.3.jar:/home/hadoop/spark/lib/jackson-jaxrs-1.9.2.jar:/home/hadoop/spark/lib/jackson-xc-1.9.2.jar' as a work-around.
18/03/01 11:36:50 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/home/hadoop/spark/lib/jackson-annotations-2.2.3.jar:/home/hadoop/spark/lib/jackson-core-2.2.3.jar:/home/hadoop/spark/lib/jackson-databind-2.2.3.jar:/home/hadoop/spark/lib/jackson-jaxrs-1.9.2.jar:/home/hadoop/spark/lib/jackson-xc-1.9.2.jar' as a work-around.
Spark context available as sc (master = yarn-client, app id = application_1519812925694_0007).
java.lang.NoClassDefFoundError: parquet/hadoop/ParquetOutputCommitter
at org.apache.spark.sql.SQLConf$.<init>(SQLConf.scala:319)
at org.apache.spark.sql.SQLConf$.<clinit>(SQLConf.scala)
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:85)
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:77)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1038)
at $iwC$$iwC.<init>(<console>:15)
at $iwC.<init>(<console>:24)
at <init>(<console>:26)
at .<init>(<console>:30)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:133)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:305)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:160)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1064)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: parquet.hadoop.ParquetOutputCommitter
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 52 more

Solution:

Download parquet-hadoop-1.4.3.jar and add it to Spark's classpath; this jar contains the parquet.hadoop.ParquetOutputCommitter class that SQLContext references during startup.
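As a minimal sketch of fetching the jar, assuming it is still published on Maven Central under the old com.twitter groupId (where pre-1.7 Parquet releases lived) and that Spark is installed in /home/hadoop/spark:

# Fetch parquet-hadoop 1.4.3, which contains the old parquet.hadoop.* package,
# and place it in Spark's lib directory.
wget https://repo1.maven.org/maven2/com/twitter/parquet-hadoop/1.4.3/parquet-hadoop-1.4.3.jar
cp parquet-hadoop-1.4.3.jar /home/hadoop/spark/lib/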

Then, in spark-defaults.conf under Spark's conf directory, append the new jar to the driver's extra classpath:



spark.driver.extraClassPath /home/hadoop/spark/lib/jackson-annotations-2.2.3.jar:/home/hadoop/spark/lib/jackson-core-2.2.3.jar:/home/hadoop/spark/lib/jackson-databind-2.2.3.jar:/home/hadoop/spark/lib/jackson-jaxrs-1.9.2.jar:/home/hadoop/spark/lib/jackson-xc-1.9.2.jar:/home/hadoop/spark/lib/parquet-hadoop-1.4.3.jar
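The deprecation warning above applies the workaround to both the driver and the executor classpath, so if executor tasks also read or write Parquet you will likely need the matching executor entry as well (an assumption; the original post only sets the driver side):

# spark-defaults.conf: mirror the driver entry for executors (assumed to be needed
# whenever executor tasks touch Parquet; the jackson paths are copied from the warning above).
spark.executor.extraClassPath /home/hadoop/spark/lib/jackson-annotations-2.2.3.jar:/home/hadoop/spark/lib/jackson-core-2.2.3.jar:/home/hadoop/spark/lib/jackson-databind-2.2.3.jar:/home/hadoop/spark/lib/jackson-jaxrs-1.9.2.jar:/home/hadoop/spark/lib/jackson-xc-1.9.2.jar:/home/hadoop/spark/lib/parquet-hadoop-1.4.3.jar

After restarting spark-shell, startup should get past SQLContext creation and print "SQL context available as sqlContext." instead of throwing the NoClassDefFoundError.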
