Spark: Is "count" on Grouped Data a Transformation or an Action?
Question
I know that `count` called on an RDD or a DataFrame is an action. But while fiddling with the spark shell, I observed the following:
scala> val empDF = Seq((1,"James Gordon", 30, "Homicide"),(2,"Harvey Bullock", 35, "Homicide"),(3,"Kristen Kringle", 28, "Records"),(4,"Edward Nygma", 30, "Forensics"),(5,"Leslie Thompkins", 31, "Forensics")).toDF("id", "name", "age", "department")
empDF: org.apache.spark.sql.DataFrame = [id: int, name: string, age: int, department: string]
scala> empDF.show
+---+----------------+---+----------+
| id| name|age|department|
+---+----------------+---+----------+
| 1| James Gordon| 30| Homicide|
| 2| Harvey Bullock| 35| Homicide|
| 3| Kristen Kringle| 28| Records|
| 4| Edward Nygma| 30| Forensics|
| 5|Leslie Thompkins| 31| Forensics|
+---+----------------+---+----------+
scala> empDF.groupBy("department").count //count returned a DataFrame
res1: org.apache.spark.sql.DataFrame = [department: string, count: bigint]
scala> res1.show
+----------+-----+
|department|count|
+----------+-----+
| Homicide| 2|
| Records| 1|
| Forensics| 2|
+----------+-----+
When I called `count` on GroupedData (`empDF.groupBy("department")`), I got another DataFrame as the result (`res1`). This leads me to believe that `count` in this case was a transformation. This is further supported by the fact that no computations were triggered when I called `count`; instead, they started when I ran `res1.show`.
I haven't been able to find any documentation suggesting that `count` could be a transformation as well. Could someone please shed some light on this?
Answer
The `.count()` that you used in your code is defined on `RelationalGroupedDataset`; it creates a new column containing the number of elements in each group and returns a new DataFrame. This is a transformation. Refer:
https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.sql.GroupedDataset
The `.count()` that you normally use on an `RDD`/`DataFrame`/`Dataset` is completely different from the above; that `.count()` is an action. Refer: https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.rdd.RDD
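The difference shows up directly in the return types. A minimal sketch, continuing the spark-shell session from the question (it assumes `empDF` is still in scope):

```scala
// Action: runs a Spark job immediately and returns a Long.
val total: Long = empDF.count()

// Transformation: returns a new, lazily evaluated DataFrame with
// columns (department, count); no job has run yet.
val byDept = empDF.groupBy("department").count()

// Only an action on the result (show, collect, count, ...) actually
// triggers the aggregation.
byDept.show()
```

Note that calling `.count()` again on `byDept` would be the action flavor, since `byDept` is an ordinary DataFrame.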
To avoid this confusion in the future, always express the grouped count through `.agg()` when operating on a grouped dataset:
import org.apache.spark.sql.functions.count  // needed for the count() column function
empDF.groupBy($"department").agg(count($"department") as "countDepartment").show