Ignoring non-spark config property: hive.exec.dynamic.partition.mode
Question
How do I run spark-shell with hive.exec.dynamic.partition.mode=nonstrict?
I tried (as suggested here):
export SPARK_MAJOR_VERSION=2; spark-shell --conf "hive.exec.dynamic.partition.mode=nonstrict" --properties-file /opt/_myPath_/sparkShell.conf
but got the warning "Ignoring non-spark config property: hive.exec.dynamic.partition.mode=nonstrict".
PS: using Spark version 2.2.0.2.6.4.0-91, Scala version 2.11.8
The need arose after hitting this error on df.write.mode("overwrite").insertInto("db.partitionedTable"):

org.apache.spark.SparkException: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
Answer
You can try using the spark.hadoop.* prefix, as suggested in the Custom Spark Configuration section of the docs for version 2.3. It might work in 2.2 as well, if that was just a documentation bug :)
spark-shell \
--conf "spark.hadoop.hive.exec.dynamic.partition=true" \
--conf "spark.hadoop.hive.exec.dynamic.partition.mode=nonstrict" \
...
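If you would rather not restart the shell, a minimal sketch of an in-session alternative is below. It assumes a Hive-enabled SparkSession bound to `spark` (the spark-shell default) and reuses the `db.partitionedTable` name and `df` DataFrame from the question; whether runtime changes to hive.exec.* properties take effect can depend on the Spark/Hive versions in use, so treat this as something to try, not a guaranteed fix:

```scala
// Assumes `spark` is a Hive-enabled SparkSession (the spark-shell default)
// and `df` matches the schema of db.partitionedTable.
spark.conf.set("hive.exec.dynamic.partition", "true")
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

// With strict mode off, a fully dynamic partitioned insert no longer
// requires a static partition column:
df.write.mode("overwrite").insertInto("db.partitionedTable")
```

`spark.conf.set` only affects the current session, so launching with `--conf "spark.hadoop.*"` (as above) remains the more reliable option when the setting must apply from startup.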