Spark DataFrame ORC Hive table reading issue


Question

I am trying to read a Hive table in Spark. Below is the Hive Table format:

# Storage Information       
SerDe Library:  org.apache.hadoop.hive.ql.io.orc.OrcSerde   
InputFormat:    org.apache.hadoop.hive.ql.io.orc.OrcInputFormat 
OutputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat    
Compressed: No  
Num Buckets:    -1  
Bucket Columns: []  
Sort Columns:   []  
Storage Desc Params:        
    field.delim \u0001
    serialization.format    \u0001

When I try to read it using Spark SQL with the below command:

val c = hiveContext.sql("""select  
        a
    from c_db.c cs 
    where dt >=  '2016-05-12' """)
c.show()

I am getting the below warning:

18/07/02 18:02:02 WARN ReaderImpl: Cannot find field for: a in _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col18, _col19, _col20, _col21, _col22, _col23, _col24, _col25, _col26, _col27, _col28, _col29, _col30, _col31, _col32, _col33, _col34, _col35, _col36, _col37, _col38, _col39, _col40, _col41, _col42, _col43, _col44, _col45, _col46, _col47, _col48, _col49, _col50, _col51, _col52, _col53, _col54, _col55, _col56, _col57, _col58, _col59, _col60, _col61, _col62, _col63, _col64, _col65, _col66, _col67,

The read starts, but it is very slow and eventually hits a network timeout.

When I try to read the Hive table directory directly, I get the below error:

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.setConf("spark.sql.orc.filterPushdown", "true") 
val c = hiveContext.read.format("orc").load("/a/warehouse/c_db.db/c")
c.select("a").show()

org.apache.spark.sql.AnalysisException: cannot resolve 'a' given input columns: [_col18, _col3, _col8, _col66, _col45, _col42, _col31, _col17, _col52, _col58, _col50, _col26, _col63, _col12, _col27, _col23, _col6, _col28, _col54, _col48, _col33, _col56, _col22, _col35, _col44, _col67, _col15, _col32, _col9, _col11, _col41, _col20, _col2, _col25, _col24, _col64, _col40, _col34, _col61, _col49, _col14, _col13, _col19, _col43, _col65, _col29, _col10, _col7, _col21, _col39, _col46, _col4, _col5, _col62, _col0, _col30, _col47, trans_dt, _col57, _col16, _col36, _col38, _col59, _col1, _col37, _col55, _col51, _col60, _col53]; at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
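
The column list in the error shows what is happening: these ORC files evidently carry only Hive's internal positional column names (_col0 ... _col67), while the real names live in the Hive metastore. The one real name that survives, trans_dt, is presumably a partition column, since partition values come from the directory layout rather than from the files themselves. A quick way to confirm this (a sketch reusing the path and HiveContext from above):

val raw = hiveContext.read.format("orc").load("/a/warehouse/c_db.db/c")
// Prints _col0 ... _col67 (plus any partition columns) instead of the
// Hive column names, which is why selecting "a" cannot resolve.
raw.printSchema()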

I can convert the Hive table to TextInputFormat, but that should be my last option, as I would like to keep the benefit of OrcInputFormat's compressed table size.

Would really appreciate your suggestions.

Answer

I found a workaround: read the table's schema from the Hive metastore and apply it explicitly when reading the ORC files:

val schema = spark.table("db.name").schema

spark.read.schema(schema).orc("/path/to/table")
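
Applied to the table in the question, that looks roughly like this (a sketch using the HiveContext from above; DataFrameReader.orc is available from Spark 1.5 onwards):

// Take the real column names from the Hive metastore ...
val schema = hiveContext.table("c_db.c").schema

// ... and impose them positionally on the ORC files read directly from HDFS.
val c = hiveContext.read.schema(schema).orc("/a/warehouse/c_db.db/c")

c.select("a").where("dt >= '2016-05-12'").show()

This keeps the ORC storage format (and its compression) while restoring the metastore column names, so there is no need to convert the table to TextInputFormat.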
