Save Array<T> in BigQuery using Java


Problem description

I'm trying to save data into BigQuery using the Spark BigQuery connector. Let's say I have a Java POJO like the one below:

@Getter
@Setter
@AllArgsConstructor
@ToString
@Builder
public class TagList {
    private String s1;
    private List<String> s2;
}

Now when I try to save this POJO into BigQuery, it throws the error below:

Caused by: com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryException: Failed to load to test_table1 in job JobId{project=<project_id>, job=<job_id>, location=US}. BigQuery error was Provided Schema does not match Table <Table_Name>. Field s2 has changed type from STRING to RECORD
    at com.google.cloud.spark.bigquery.BigQueryWriteHelper.loadDataToBigQuery(BigQueryWriteHelper.scala:156)
    at com.google.cloud.spark.bigquery.BigQueryWriteHelper.writeDataFrameToBigQuery(BigQueryWriteHelper.scala:89)
    ... 35 more

Sample code:

Dataset<TagList> mapDS = inputDS.map((MapFunction<Row, TagList>) x -> {
    List<String> list = new ArrayList<>();
    list.add(x.get(0).toString());
    list.add("temp1");
    return TagList.builder()
            .s1("Hello World")
            .s2(list)
            .build();
}, Encoders.bean(TagList.class));

mapDS.write().format("bigquery")
        .option("temporaryGcsBucket", "<bucket_name>")
        .option("table", "<table_name>")
        .option("project", projectId)
        .option("parentProject", projectId)
        .mode(SaveMode.Append)
        .save();

BigQuery table:

create table <dataset>.<table_name> (
  s1 string,
  s2 array<string>
)
PARTITION BY
  TIMESTAMP_TRUNC(_PARTITIONTIME, HOUR);

Answer

Please change the intermediateFormat to AVRO or ORC. When using Parquet, the serialization creates an intermediate structure for the array field, which is why BigQuery reports that s2 changed from STRING to RECORD. See more at https://github.com/GoogleCloudDataproc/spark-bigquery-connector#properties
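
Applied to the write from the question, this amounts to adding the intermediateFormat option (a minimal sketch based on the connector's documented properties; the bucket and table names are placeholders as in the question):

mapDS.write().format("bigquery")
        .option("temporaryGcsBucket", "<bucket_name>")
        .option("table", "<table_name>")
        .option("project", projectId)
        .option("parentProject", projectId)
        // Write the temporary GCS files as Avro instead of the default Parquet,
        // so List<String> is loaded as ARRAY<STRING> rather than a RECORD.
        .option("intermediateFormat", "avro")
        .mode(SaveMode.Append)
        .save();

Note that the Avro intermediate format typically requires the spark-avro package on the classpath (for example org.apache.spark:spark-avro_2.12 matching your Spark version), whereas "orc" works with Spark's built-in ORC support.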
