Enable case sensitivity for spark.sql globally
Question
The option spark.sql.caseSensitive controls whether column names etc. should be case sensitive or not. It can be set e.g. by spark_session.sql('set spark.sql.caseSensitive=true') and is false per default.
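For illustration, a minimal PySpark sketch of setting the option per session, assuming a standard pyspark installation; the builder-level config call has the same effect as the SET statement above:

    from pyspark.sql import SparkSession

    # Set it on an already-running session (same effect as the SET statement above).
    spark = SparkSession.builder.getOrCreate()
    spark.conf.set("spark.sql.caseSensitive", "true")

    # Or set it when the session is created.
    spark = (SparkSession.builder
             .config("spark.sql.caseSensitive", "true")
             .getOrCreate())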
It does not seem to be possible to enable it globally in $SPARK_HOME/conf/spark-defaults.conf with spark.sql.caseSensitive: True though. Is that intended, or is there some other file to set SQL options? Also, in the source it is stated that it is highly discouraged to enable this at all. What is the rationale behind that advice?
Solution
As it turns out, setting spark.sql.caseSensitive: True in $SPARK_HOME/conf/spark-defaults.conf DOES work after all. It just has to be done in the configuration of the Spark driver as well, not the master or workers. Apparently I forgot that when I last tried.
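A minimal sketch of checking from the driver whether the global default was picked up; the config line and spark-submit flag in the comments follow standard Spark conventions, and the check itself is only illustrative:

    from pyspark.sql import SparkSession

    # Assumes the driver's $SPARK_HOME/conf/spark-defaults.conf contains a line such as:
    #   spark.sql.caseSensitive true
    # or that the job was submitted with:
    #   spark-submit --conf spark.sql.caseSensitive=true ...

    spark = SparkSession.builder.getOrCreate()

    # Prints "true" if the default was applied on the driver side.
    print(spark.conf.get("spark.sql.caseSensitive"))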