Java Spark : Stack Overflow Error on GroupBy
Problem description
I am using Spark 2.3.1 with Java.
I have a Dataset that I want to group in order to run some aggregations (say, a count() for this example). The grouping must be done according to a given list of columns.
My function is the following:
public Dataset<Row> compute(Dataset<Row> data, List<String> columns){
    final List<Column> columns_col = new ArrayList<Column>();
    for (final String tag : columns) {
        columns_col.add(new Column(tag));
    }
    Seq<Column> columns_seq = JavaConverters.asScalaIteratorConverter(columns_col.iterator()).asScala().toSeq();
    System.out.println("My columns : "+columns_seq.mkString(", "));
    System.out.println("Data count : "+data.count());
    final Dataset<Row> dataset_count = data.groupBy(columns_seq).agg(count(col("value")));
    System.out.println("Result count : "+dataset_count.count());
    return dataset_count;
}
When I call it like this:
Dataset<Row> df = compute(MyDataset, Arrays.asList("field1","field2","field3","field4"));
I get a StackOverflowError on dataset_count.count():
My columns : field1, field2, field3, field4
Data count : 136821
Exception in thread "main" java.lang.StackOverflowError
at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
at scala.collection.immutable.Stream.drop(Stream.scala:858)
at scala.collection.immutable.Stream.drop(Stream.scala:202)
at scala.collection.LinearSeqOptimized$class.apply(LinearSeqOptimized.scala:64)
at scala.collection.immutable.Stream.apply(Stream.scala:202)
...
But if I replace the following line in my function:
final Dataset<Row> dataset_count = data.groupBy(columns_seq).agg(count(col("value")));
with
final Dataset<Row> dataset_count = data.groupBy("field1","field2","field3","field4").agg(count(col("value")));
then I get no error, and my program computes the result correctly:
My columns : field1, field2, field3, field4
Data count : 136821
Result count : 74698
Where might this problem come from, and is there a solution for grouping a Dataset according to a list of columns that is not known in advance?
Recommended answer
Try using this instead:
Seq<Column> columns_seq = JavaConversions.asScalaBuffer(columns_col).seq();
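For reference, here is a minimal sketch of the compute method from the question with that conversion swapped in. This is an untested rewrite based on the answer above, not code from the original poster, and it assumes the same imports and the same "value" column as in the question:

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

import scala.collection.JavaConversions;
import scala.collection.Seq;

public Dataset<Row> compute(Dataset<Row> data, List<String> columns) {
    // Build a Column object for each requested column name.
    final List<Column> columns_col = new ArrayList<>();
    for (final String tag : columns) {
        columns_col.add(new Column(tag));
    }
    // asScalaBuffer wraps the Java list in a Buffer-backed Seq, rather than the
    // lazy Stream that asScalaIteratorConverter(...).asScala().toSeq() produces,
    // which is the Stream showing up in the stack trace above.
    final Seq<Column> columns_seq = JavaConversions.asScalaBuffer(columns_col).seq();
    return data.groupBy(columns_seq).agg(count(col("value")));
}

As a side note, since Dataset.groupBy also exposes a Java-friendly varargs overload, passing an array such as data.groupBy(columns_col.toArray(new Column[0])) should work as well and avoids the Scala collection conversion entirely, though the JavaConversions route above is what the answer proposes.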