Encoder for Row Type Spark Datasets


Problem Description

I would like to write an encoder for the Row type in a Dataset, for a map operation that I am doing. Essentially, I do not understand how to write encoders.

Below is an example of a map operation. In this example, instead of returning Dataset<String>, I would like to return Dataset<Row>:

Dataset<String> output = dataset1.flatMap(new FlatMapFunction<Row, String>() {
    @Override
    public Iterator<String> call(Row row) throws Exception {
        ArrayList<String> obj = new ArrayList<>(); // some map operation
        return obj.iterator();
    }
}, Encoders.STRING());

I understand that, instead of a String encoder, an Encoder for Row needs to be written along the following lines:

    Encoder<Row> encoder = new Encoder<Row>() {
        @Override
        public StructType schema() {
            return join.schema();
        }

        @Override
        public ClassTag<Row> clsTag() {
            return null;
        }
    };

However, I do not understand the clsTag() in the encoder, and I am trying to find a running example which demonstrates something similar (i.e. an encoder for a Row type).

Edit - This is not a duplicate of the question mentioned (Encoder error while trying to map dataframe row to updated row), as that answer is about using Spark 1.x code in Spark 2.x, which I am not doing. Also, I am looking for an encoder for the Row class, rather than a fix for an error. Finally, I am looking for a solution in Java, not Scala.

Recommended Answer

The answer is to use a RowEncoder, with the schema of the dataset defined as a StructType.

Below is a working example of a flatMap operation with Datasets:

    StructType structType = new StructType();
    structType = structType.add("id1", DataTypes.LongType, false);
    structType = structType.add("id2", DataTypes.LongType, false);

    ExpressionEncoder<Row> encoder = RowEncoder.apply(structType);

    Dataset<Row> output = join.flatMap(new FlatMapFunction<Row, Row>() {
        @Override
        public Iterator<Row> call(Row row) throws Exception {
            // a static map operation to demonstrate
            List<Object> data = new ArrayList<>();
            data.add(1L);
            data.add(2L);
            ArrayList<Row> list = new ArrayList<>();
            list.add(RowFactory.create(data.toArray()));
            return list.iterator();
        }
    }, encoder);
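The rows produced inside the flatMap must line up field-for-field with the schema passed to the encoder. This correspondence can be checked without a SparkSession, since StructType and RowFactory are plain classes. A minimal sketch, assuming spark-sql is on the classpath (the class name SchemaRowCheck is made up for illustration):

```java
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class SchemaRowCheck {
    public static void main(String[] args) {
        // Same schema as in the answer: two non-nullable long columns.
        StructType structType = new StructType()
                .add("id1", DataTypes.LongType, false)
                .add("id2", DataTypes.LongType, false);

        // A row shaped like the ones emitted inside the flatMap.
        Row row = RowFactory.create(1L, 2L);

        System.out.println(structType.size()); // number of declared fields
        System.out.println(row.size());        // number of values in the row
        System.out.println(row.getLong(0) + row.getLong(1));
    }
}
```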

