org.apache.spark.sql.AnalysisException: Can not create the managed table

Spark jobs intermittently fail at runtime with the following error.

Traceback (most recent call last):
  File "/dfs/data9/nm-local-dir/usercache/hadoop/appcache/application_1666879209698_29104/container_e26_1666879209698_29104_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/dfs/data9/nm-local-dir/usercache/hadoop/appcache/application_1666879209698_29104/container_e26_1666879209698_29104_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o193.saveAsTable.
: org.apache.spark.sql.AnalysisException: Can not create the managed table('`xy_fx`.`mdl_uniswapv3_stat_protocol_hour_daytmp01`'). The associated location('hdfs://ns1/user/hive/warehouse/xy_fx.db/mdl_uniswapv3_stat_protocol_hour_daytmp01') already exists.;
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.validateTableLocation(SessionCatalog.scala:336)
	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:170)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)

Root cause: the table had already been dropped, but its HDFS directory still existed, which triggers the error above.
Fix: add the following configuration parameter to the Spark job.

.set("spark.sql.legacy.allowCreatingManagedTableUsingNonemptyLocation","true")