Two exceptions hit when integrating Hadoop HDFS with Mahout for collaborative filtering
阿新 · Published 2019-02-07
Before getting to the exceptions, here is the project's pom.xml dependency configuration:
<dependencies>
    <dependency>
        <groupId>org.apache.mahout</groupId>
        <artifactId>mahout-core</artifactId>
        <version>0.9</version>
    </dependency>
    <dependency>
        <groupId>org.apache.mahout</groupId>
        <artifactId>mahout-integration</artifactId>
        <version>0.9</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.2</version>
    </dependency>
</dependencies>
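For context, both exceptions below came up running code along these lines: a Taste user-based recommender whose ratings file lives on HDFS. This is only a minimal sketch, not the original project's code; the NameNode URI hdfs://namenode:9000, the paths /data/ratings.csv and /tmp/ratings.csv, and user ID 1 are all placeholders. The file is copied to local disk first because FileDataModel reads a java.io.File.

import java.io.File;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class HdfsUserCF {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Talking to the NameNode here is where Exception 1 (IPC version mismatch) surfaces
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
        fs.copyToLocalFile(new Path("/data/ratings.csv"), new Path("/tmp/ratings.csv"));
        fs.close();

        // FileDataModel expects "userID,itemID,preference" lines in a local file
        DataModel model = new FileDataModel(new File("/tmp/ratings.csv"));
        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

        // Top-3 recommendations for user 1 (placeholder ID)
        for (RecommendedItem item : recommender.recommend(1L, 3)) {
            System.out.println(item.getItemID() + " -> " + item.getValue());
        }
    }
}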
Exception 1
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
After some digging: mahout-core transitively depends on an old hadoop-core artifact (a Hadoop 1.x client, IPC version 4), whose classes shadow the 2.9.0 client (IPC version 9) pulled in by hadoop-client.
So we just need to exclude the old hadoop-core from the mahout-core dependency.
The fixed dependency declaration:
<dependency>
    <groupId>org.apache.mahout</groupId>
    <artifactId>mahout-core</artifactId>
    <version>0.9</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
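To confirm the exclusion took effect, running `mvn dependency:tree -Dincludes=org.apache.hadoop` should now show only the 2.9.0 hadoop-client artifacts under mahout-core's subtree, with no hadoop-core 1.x entry.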
Exception 2
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
Solution: this exception means the hdfs scheme cannot be resolved to a FileSystem implementation (typically the hadoop-hdfs service registration is missing from the classpath, e.g. after repackaging into a fat jar), so register the implementation explicitly:
Configuration configuration = new Configuration();
// Map the "hdfs" scheme to the HDFS client implementation by hand
configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
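For completeness, a minimal self-contained sketch of this workaround. The NameNode URI is a placeholder, and setting fs.file.impl alongside fs.hdfs.impl is my own precaution rather than something from the original fix. If you build a fat jar with maven-shade-plugin, merging the META-INF/services files via its ServicesResourceTransformer is another common remedy for this exception.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class HdfsSchemeFix {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Explicitly map schemes to implementations so FileSystem.get() can resolve them
        conf.set("fs.hdfs.impl", DistributedFileSystem.class.getName());
        // Mapping "file" as well is an extra precaution (assumption, not from the original post)
        conf.set("fs.file.impl", LocalFileSystem.class.getName());

        // "hdfs://namenode:9000" is a placeholder for your NameNode address
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);
        System.out.println("Scheme resolved: " + fs.getScheme());
        fs.close();
    }
}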