Fetch more than 20 rows and display the full value of a column in spark-shell
Problem description
I am using CassandraSQLContext from spark-shell to query data from Cassandra. I want to know two things: first, how to fetch more than 20 rows using CassandraSQLContext, and second, how to display the full value of a column. As you can see below, by default it truncates string values and appends dots.
Code:
val csc = new CassandraSQLContext(sc)
csc.setKeyspace("KeySpace")
val maxDF = csc.sql("SQL_QUERY" )
maxDF.show
Output:
+--------------------+--------------------+-----------------+--------------------+
| id| Col2| Col3| Col4|
+--------------------+--------------------+-----------------+--------------------+
|8wzloRMrGpf8Q3bbk...| Value1| X| K1|
|AxRfoHDjV1Fk18OqS...| Value2| Y| K2|
|FpMVRlaHsEOcHyDgy...| Value3| Z| K3|
|HERt8eFLRtKkiZndy...| Value4| U| K4|
|nWOcbbbm8ZOjUSNfY...| Value5| V| K5|
Answer
If you want to print the whole value of a column in Scala, you just need to set the truncate argument of the show method to false:
maxDF.show(false)
And if you wish to show more than 20 rows:
// example showing 30 rows of
// maxDF untruncated
maxDF.show(30, false)
For PySpark, you need to specify the argument by name:
maxDF.show(truncate=False)
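As an aside, the dots in the original output come from show shortening any cell value longer than the truncate width (20 characters by default) and replacing its tail with "...". A minimal plain-Python sketch of that behavior (the function name truncate_cell is hypothetical, not a Spark API):

```python
def truncate_cell(value: str, truncate: int = 20) -> str:
    """Loosely mirror how DataFrame.show shortens long cell values.

    Values longer than `truncate` characters are cut so that the
    result, including the "..." suffix, is `truncate` characters wide.
    """
    if truncate > 0 and len(value) > truncate:
        return value[: truncate - 3] + "..."
    return value


# The long id from the sample output above is shortened,
# while short values like "Value1" pass through unchanged.
print(truncate_cell("8wzloRMrGpf8Q3bbkAgeqVLRPmW"))  # 8wzloRMrGpf8Q3bbk...
print(truncate_cell("Value1"))                       # Value1
```

Passing false (Scala) or truncate=False (PySpark) simply disables this shortening, which is why the full column values appear.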