Encoder for Row Type Spark Datasets
Question
I would like to write an encoder for a Row type in a Dataset, for a map operation that I am doing. Essentially, I do not understand how to write encoders.
Below is an example of a map operation:
In the example below, instead of returning Dataset&lt;String&gt;, I want to return Dataset&lt;Row&gt;:
Dataset<String> output = dataset1.flatMap(new FlatMapFunction<Row, String>() {
    @Override
    public Iterator<String> call(Row row) throws Exception {
        ArrayList<String> obj = //some map operation
        return obj.iterator();
    }
}, Encoders.STRING());
I understand that, instead of a String encoder, an Encoder&lt;Row&gt; needs to be written as follows:
Encoder<Row> encoder = new Encoder<Row>() {
    @Override
    public StructType schema() {
        return join.schema();
        //return null;
    }

    @Override
    public ClassTag<Row> clsTag() {
        return null;
    }
};
However, I do not understand the clsTag() in the encoder, and I am trying to find a running example which can demonstrate something similar (i.e. an encoder for a Row type).
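As background on that method: clsTag() is expected to return a Scala ClassTag for the encoded type rather than null. In Java one can be built from the ClassTag$ module object, as in this sketch (assuming Spark 2.x with its bundled scala-library on the classpath; note that a hand-written Encoder is still not enough on its own, since Spark internally also needs serializer/deserializer expressions, which is why RowEncoder is the practical route):

```java
import org.apache.spark.sql.Row;
import scala.reflect.ClassTag;
import scala.reflect.ClassTag$;

public class ClsTagSketch {
    public static void main(String[] args) {
        // Build a ClassTag<Row> from the Java Class object; this is the
        // kind of value a hand-written Encoder's clsTag() should return.
        ClassTag<Row> tag = ClassTag$.MODULE$.apply(Row.class);
        System.out.println(tag.runtimeClass().getName());
    }
}
```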
Edit - This is not a copy of the question mentioned: Encoder error while trying to map dataframe row to updated row. The answer there discusses using Spark 1.x in Spark 2.x (I am not doing so); also, I am looking for an encoder for the Row class rather than a way to resolve an error. Finally, I was looking for a solution in Java, not in Scala.
Recommended Answer
The answer is to use a RowEncoder together with a StructType describing the schema of the dataset.
Below is a working example of a flatMap operation with Datasets:
StructType structType = new StructType();
structType = structType.add("id1", DataTypes.LongType, false);
structType = structType.add("id2", DataTypes.LongType, false);

ExpressionEncoder<Row> encoder = RowEncoder.apply(structType);

Dataset<Row> output = join.flatMap(new FlatMapFunction<Row, Row>() {
    @Override
    public Iterator<Row> call(Row row) throws Exception {
        // a static map operation to demonstrate
        List<Object> data = new ArrayList<>();
        data.add(1L);
        data.add(2L);
        ArrayList<Row> list = new ArrayList<>();
        list.add(RowFactory.create(data.toArray()));
        return list.iterator();
    }
}, encoder);
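The same operation can also be written more compactly with a Java 8 lambda. This is a self-contained sketch under the assumption of a local Spark 2.x session (the class name, master setting, and one-row input dataset standing in for `join` are illustrative, not from the original question); the cast to FlatMapFunction disambiguates the flatMap overload:

```java
import java.util.Collections;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder;
import org.apache.spark.sql.catalyst.encoders.RowEncoder;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class RowEncoderLambda {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[1]")
                .appName("row-encoder-demo")
                .getOrCreate();

        // Same two-column schema as in the answer above.
        StructType schema = new StructType()
                .add("id1", DataTypes.LongType, false)
                .add("id2", DataTypes.LongType, false);
        ExpressionEncoder<Row> encoder = RowEncoder.apply(schema);

        // A one-row input dataset standing in for `join` from the answer.
        Dataset<Row> input = spark.range(1).toDF();

        // Emit one static Row per input row, as in the anonymous-class version.
        Dataset<Row> output = input.flatMap(
                (FlatMapFunction<Row, Row>) row ->
                        Collections.singletonList(RowFactory.create(1L, 2L)).iterator(),
                encoder);

        output.show();
        spark.stop();
    }
}
```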