How to write a Dataset to an Excel file using the HadoopOffice library in Apache Spark (Java)

Question

Currently I am using com.crealytics.spark.excel to read an Excel file, but with this library I can't write the Dataset back to an Excel file.

This link says that using the HadoopOffice library (org.zuinnote.spark.office.excel) we can both read and write Excel files.

Please help me write a Dataset object to an Excel file in Spark with Java.

Recommended answer

You can use org.zuinnote.spark.office.excel to both read and write Excel files with a Dataset. Examples are given at https://github.com/ZuInnoTe/spark-hadoopoffice-ds/. However, there is one issue if you read an Excel file into a Dataset and then try to write it to another Excel file. Please see the issue and the Scala workaround at https://github.com/ZuInnoTe/hadoopoffice/issues/12.

I have written a sample program in Java using org.zuinnote.spark.office.excel and the workaround given at that link. Please see if this helps you.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class SparkExcel {
    public static void main(String[] args) {
        //spark session
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkExcel")
                .master("local[*]")
                .getOrCreate();

        //Read
        Dataset<Row> df = spark
                .read()
                .format("org.zuinnote.spark.office.excel")
                .option("read.locale.bcp47", "de")
                .load("c:\\temp\\test1.xlsx");

        //Print
        df.show();
        df.printSchema();

        //Flatmap function: each input row wraps a list of spreadsheet rows;
        //copy the first five string cells of each into a String[]
        FlatMapFunction<Row, String[]> flatMapFunc = new FlatMapFunction<Row, String[]>() {
            @Override
            public Iterator<String[]> call(Row row) throws Exception {
                ArrayList<String[]> rowList = new ArrayList<String[]>();
                List<Row> spreadSheetRows = row.getList(0);
                for (Row srow : spreadSheetRows) {
                    ArrayList<String> arr = new ArrayList<String>();
                    arr.add(srow.getString(0));
                    arr.add(srow.getString(1));
                    arr.add(srow.getString(2));
                    arr.add(srow.getString(3));
                    arr.add(srow.getString(4));
                    rowList.add(arr.toArray(new String[] {}));
                }
                return rowList.iterator();
            }
        };

        //Apply flatMap function
        Dataset<String[]> df2 = df.flatMap(flatMapFunc, spark.implicits().newStringArrayEncoder());

        //Write
        df2.write()
           .mode(SaveMode.Overwrite)
           .format("org.zuinnote.spark.office.excel")
           .option("write.locale.bcp47", "de")
           .save("c:\\temp\\test2.xlsx");

    }
}
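The core of the workaround is the flatMap conversion: each Dataset row holds a list of spreadsheet rows, and the first five string cells of each are copied into a String[]. The same logic can be sketched as plain Java, without Spark, to make the shape of the transformation clear (the class and method names here are illustrative, not part of the library):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FlattenSketch {
    // Mirrors the flatMap above: take the first five string cells
    // of each spreadsheet row and copy them into a String[].
    static List<String[]> flatten(List<List<String>> spreadSheetRows) {
        List<String[]> out = new ArrayList<>();
        for (List<String> srow : spreadSheetRows) {
            out.add(srow.subList(0, 5).toArray(new String[0]));
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<String>> rows = Arrays.asList(
                Arrays.asList("a", "b", "c", "d", "e"),
                Arrays.asList("1", "2", "3", "4", "5"));
        for (String[] r : flatten(rows)) {
            System.out.println(String.join(",", r)); // prints a,b,c,d,e then 1,2,3,4,5
        }
    }
}
```

Note that this assumes exactly five columns per row; for a different sheet you would adjust the number of cells copied.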

I have tested this code with Java 8 and Spark 2.1.0. I am using Maven and added the dependency for org.zuinnote.spark.office.excel from https://mvnrepository.com/artifact/com.github.zuinnote/spark-hadoopoffice-ds_2.11/1.0.3
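Based on the artifact coordinates in that link, the corresponding Maven dependency (version 1.0.3, built for Scala 2.11) would look like this:

```xml
<dependency>
    <groupId>com.github.zuinnote</groupId>
    <artifactId>spark-hadoopoffice-ds_2.11</artifactId>
    <version>1.0.3</version>
</dependency>
```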
