
Risk control project exception log:

1.java.lang.NoClassDefFoundError: org/apache/spark/api/java/function/Function0
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
    at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
    at java.lang.Class.getMethod0(Class.java:3018)
    at java.lang.Class.getMethod(Class.java:1784)
    at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.api.java.function.Function0
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 7 more
Disconnected from the target VM, address: '127.0.0.1:52278', transport: 'socket'
Error: A JNI error has occurred, please check your installation and try again

Solution:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.1.1</version>
   <scope>provided</scope>
</dependency>
The dependency was declared with <scope>provided</scope>, and provided is only in effect at compile and test time; change provided to compile or simply delete the <scope> element. (A quick runtime check is sketched after the scope list below.)
What each <scope> value covers:
compile: the default; in effect for all phases (the jar is visible on the compile, runtime, and test classpaths) and is packaged and released with the project.
provided: in effect at compile and test time; at runtime the jar is supplied by the server/container, e.g. servlet-api.
runtime: needed at runtime; in effect for tests and execution, e.g. a JDBC driver.
test: used only for tests; plays no part in compilation or execution and is not released with the project.
system: not resolved from a Maven repository; you must explicitly specify the path to the jar, which makes the project hard to port.
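
For reference, a minimal sketch of how the symptom shows up: a class from a provided-scoped jar resolves at compile time but is missing when the program is launched. The object name is made up for illustration.

object ClasspathCheck {
  def main(args: Array[String]): Unit = {
    try {
      // probe for a class that lives in the provided-scoped spark-streaming jar
      Class.forName("org.apache.spark.api.java.function.Function0")
      println("spark-streaming classes are on the runtime classpath")
    } catch {
      case _: ClassNotFoundException =>
        println("class missing at runtime: the jar is provided-scoped, change it to compile")
    }
  }
}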

2.java.lang.ClassNotFoundException: org.apache.spark.streaming.kafka.KafkaUtils

Cause: once again a dependency declared with <scope>provided</scope>:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>

Solution: remove <scope>provided</scope> (or change it to compile), the same fix as in item 1.

3.Reconnect due to error: java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>(Ljava/lang/String;[Ljava/nio/ByteBuffer;)V

    at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:41) ~[kafka_2.11-0.10.0.1.jar:?]
    at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:44) ~[kafka_2.11-0.10.0.1.jar:?]
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:112) ~[kafka_2.11-0.10.0.1.jar:?]
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85) [kafka_2.11-0.10.0.1.jar:?]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) [kafka_2.11-0.10.0.1.jar:?]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) [kafka_2.11-0.10.0.1.jar:?]
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) [kafka_2.11-0.10.0.1.jar:?]

Cause: a kafka dependency had been added by hand and presumably conflicts with the kafka-clients version that spark-streaming-kafka already pulls in; commenting it out fixes the error:

<!--<dependency>-->
    <!--<groupId>org.apache.kafka</groupId>-->
    <!--<artifactId>kafka_2.11</artifactId>-->
    <!--<version>0.10.2.0</version>-->
<!--</dependency>-->

4.10-26 10:57:17[org.apache.hadoop.util.Shell-303][pool-19-thread-1][315990] - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:886)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:783)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:772)
    at org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler.run(Checkpoint.scala:234)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Cause: winutils.exe, the tool used to connect to Hadoop on Windows, is not installed locally.

Download: https://github.com/srccodes/hadoop-common-2.2.0-bin
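
If you prefer not to touch environment variables, a common workaround (a sketch; the C:\hadoop path is an assumption) is to point the hadoop.home.dir system property at the directory whose bin folder holds winutils.exe, before any Hadoop filesystem call runs:

// Sketch: set hadoop.home.dir early in main(), before the StreamingContext /
// checkpoint writer touches the local filesystem. "C:\\hadoop" is an assumed
// extraction path; its bin directory must contain winutils.exe.
System.setProperty("hadoop.home.dir", "C:\\hadoop")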

6.Exception in thread "pool-19-thread-5" java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:404)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:886)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:783)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:772)
    at org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler.run(Checkpoint.scala:234)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Cause: Hadoop is not installed on the local Windows machine. Download hadoop-2.7.3.tar.gz from the official site and extract it, set HADOOP_HOME, and once that is configured copy hadoop.dll and winutils.exe from the hadoop-common-2.2.0-bin package downloaded in item 4 into Hadoop's bin directory; ideally also copy hadoop.dll into windows/System32. Reboot, rerun the program, and the exception is resolved.
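
A small sanity check (a sketch) to confirm the JVM can see HADOOP_HOME and that winutils.exe sits in its bin directory before rerunning the job:

import java.nio.file.{Files, Paths}

object HadoopHomeCheck {
  def main(args: Array[String]): Unit = {
    // fail fast if HADOOP_HOME is not visible to the JVM
    val home = sys.env.getOrElse("HADOOP_HOME", sys.error("HADOOP_HOME is not set"))
    val winutils = Paths.get(home, "bin", "winutils.exe")
    println(s"HADOOP_HOME = $home, winutils present: ${Files.exists(winutils)}")
  }
}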

7. Creating KafkaUtils.createDirectStream in Scala reports: (1) no matching method found, (2) the type parameter R must be supplied

Cause: this code was copied straight from the Java version and converted in IDEA. In the Scala file I meant to use the Scala createDirectStream, but the call that came over with the copy still targets the Java createDirectStream; the two are overloads that take different argument types. The java.util Map and Set copied from the Java code cannot be converted into Scala's Map and Set, and the Scala createDirectStream requires Scala Map and Set.

Solution: replace the copied Map and Set with Scala's Map and Set, as in the sketch below.
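
For reference, a minimal sketch of the Scala call against spark-streaming-kafka-0-8, using Scala Map/Set for kafkaParams and topics (app name, broker address, and topic name are assumptions):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DirectStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("risk-control-stream").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // scala.collection Map and Set, not the java.util.Map / java.util.Set copied from the Java code
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092") // assumed broker address
    val topics = Set("risk_events")                                 // assumed topic name

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    stream.map(_._2).print()
    ssc.start()
    ssc.awaitTermination()
  }
}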

8. [Java exception] ERROR: JDWP Unable to get JNI 1.2 environment, jvm->GetEnv() return code = -2 JDWP exit error

Cause:
1. JDK 1.8.1
2. The code from the previous debug run had an error, so its process never terminated and kept hold of the Console output; starting the next debug session then produces this error.

Solutions:
1. Add System.exit(0); at the end of main() (see the sketch after this list).
  System.exit(0) terminates the program immediately, so any threads still running tasks will not get to finish them.
2. Kill the leftover background java process and run again.
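
A sketch of the first workaround (the runStreamingJob name is hypothetical): force the JVM to exit once main finishes, so a leftover debuggee process does not keep holding the console or the debug port.

object Job {
  def main(args: Array[String]): Unit = {
    try {
      runStreamingJob() // hypothetical entry point that starts and awaits the StreamingContext
    } finally {
      System.exit(0) // terminates immediately; threads still running will not finish their work
    }
  }

  private def runStreamingJob(): Unit = {
    // build the StreamingContext, start it, and await termination here
  }
}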

9.Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker

The main cause is a Kafka version mismatch: the cluster runs Kafka 0.10, while the 0.8 Kafka integration was actually used during development.

The pom file before the fix:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.1</version>
    <!--<scope>provided</scope>-->
</dependency>

After swapping the positions of the 0-10 and 0-8 dependencies and dropping <scope>provided</scope> from the 0-10 one, the problem was resolved:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.1</version>
    <!--<scope>provided</scope>-->
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.1.1</version>
    <!--<scope>provided</scope>-->
</dependency>