
Fixing "org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source"

Reference for this fix:

"Spark 2.3 cannot read Hive 3.0 data on HDP 3.1"

Problem description: Spark and Hive were deployed with Ambari. Running insert into table xxx partition(dt='xxx') select xxx from xxx where dt='xxx' in Spark SQL fails with the following error:

org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 to destination hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/dt=20201220;
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
  at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
  at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:248)
  at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
  at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
  at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
  at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
  ... 49 elided
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 to destination hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/dt=20201220
  at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:4303)
  at org.apache.hadoop.hive.ql.metadata.Hive.getHiveException(Hive.java:4258)
  at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:4253)
  at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:4620)
  at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:2132)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.spark.sql.hive.client.Shim_v3_0.loadPartition(HiveShim.scala:1275)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:747)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:745)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:278)
  at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:216)
  at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:215)
  at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:261)
  at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:745)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
  ... 63 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Load Data failed for hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 as the file is not owned by hive and load data is also not ran as hive
  at org.apache.hadoop.hive.ql.metadata.Hive.needToCopy(Hive.java:4347)
  at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:4187)
  ... 82 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Load Data failed for hdfs://az-ccip-hadoop01.hdp:8020/warehouse/tablespace/managed/hive/ford.db/s_leads/.hive-staging_hive_2020-12-22_07-37-14_526_202796727754164477-1/-ext-10000 as the file is not owned by hive and load data is also not ran as hive
  at org.apache.hadoop.hive.ql.metadata.Hive.needToCopy(Hive.java:4338)
  ... 83 more
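The decisive line is the last Caused by: the staging files were written by the user who submitted the Spark job, and Hive 3 refuses to move them into a managed table because they are "not owned by hive and load data is also not ran as hive". The gate Hive applies inside needToCopy can be illustrated roughly like this (a minimal sketch for intuition only; this is not Hive's actual code, and the function names here are made up):

```python
# Illustrative sketch of the ownership check behind the error above.
# NOT Hive's real implementation; names are hypothetical.
# Hive 3 allows a plain rename into the managed warehouse only when the
# staging files are owned by the privileged user (hive) OR the load
# itself runs as that user.

PRIVILEGED_USER = "hive"

def move_allowed(file_owner: str, running_user: str) -> bool:
    """True if a direct move into the managed warehouse would be permitted."""
    return file_owner == PRIVILEGED_USER or running_user == PRIVILEGED_USER

def load_partition(file_owner: str, running_user: str) -> str:
    if not move_allowed(file_owner, running_user):
        raise RuntimeError(
            f"Load Data failed ... as the file is not owned by {PRIVILEGED_USER} "
            f"and load data is also not ran as {PRIVILEGED_USER}"
        )
    return "moved"

# A Spark job submitted as user 'ford' writes staging files owned by 'ford',
# so both conditions fail and the move is refused:
try:
    load_partition(file_owner="ford", running_user="ford")
except RuntimeError as e:
    print("refused:", e)

# Files owned by hive would pass the check:
print(load_partition(file_owner="hive", running_user="ford"))
```

This is why the error appears only for Hive 3 managed tables: writes to them are expected to go through Hive itself, while Spark 2.3 writes the staging data as the submitting user.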

Fix:

Change metastore.catalog.default to hive (on HDP 3 it defaults to spark for Spark2, which routes writes through a path that trips the ownership check above), then restart Spark2:

<property>
      <name>metastore.catalog.default</name>
      <value>hive</value>
</property>
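If changing the cluster-wide configuration is not convenient, Spark's generic spark.hadoop.* passthrough (which copies such keys into the application's Hadoop/Hive configuration) may allow the same override per application. This is a sketch only; whether the metastore honors a per-job value set this way was not verified here:

```
# spark-defaults.conf (or --conf on spark-submit) -- hypothetical per-app override
spark.hadoop.metastore.catalog.default  hive
```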