Mapping UUID in Spark Cassandra connector


Problem description


I have the following code to save RDD to cassandra:

JavaRDD<UserByID> mapped = ......

CassandraJavaUtil.javaFunctions(mapped)
    .writerBuilder("mykeyspace", "user_by_id", mapToRow(UserByID.class))
    .saveToCassandra();


And UserByID is a normal serializable POJO with getters and setters and the following field:

private UUID userid;


The Cassandra table column names match the UserByID field names exactly, and userid is of type uuid in the Cassandra table. I am loading data successfully from the table using the same class mapping:

CassandraJavaRDD<UserByID> userByIDRDD = javaFunctions(spark)
    .cassandraTable("mykeyspace", "user_by_id", mapRowTo(UserByID.class));


However, when I call the saveToCassandra function above, I get the following exception:

org.apache.spark.SparkException: Job aborted due to stage failure: Task
0 in stage 227.0 failed 1 times, most recent failure: Lost task 0.0
in stage 227.0 (TID 12721, localhost, executor driver): 
java.lang.IllegalArgumentException: 
The value (4e22e71a-a387-4de8-baf1-0ef6e65fe33e) of the type 
(java.util.UUID) cannot be converted to 
struct<leastSignificantBits:bigint,mostSignificantBits:bigint> 
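One way to see where that struct type comes from: java.util.UUID is a plain Java class whose only bean-style properties are two longs, so a mapper that has no dedicated converter for the field and falls back to reflecting over the object sees exactly that two-bigint struct. A small self-contained sketch (not connector code) to illustrate:

```java
import java.util.UUID;

public class UuidBitsDemo {
    public static void main(String[] args) {
        // The UUID value from the stack trace above
        UUID u = UUID.fromString("4e22e71a-a387-4de8-baf1-0ef6e65fe33e");

        // java.util.UUID exposes exactly two long "properties" via getters,
        // which is why a reflective mapper would describe it as
        // struct<leastSignificantBits:bigint,mostSignificantBits:bigint>
        long msb = u.getMostSignificantBits();
        long lsb = u.getLeastSignificantBits();

        // The two longs fully determine the UUID
        System.out.println(new UUID(msb, lsb).equals(u)); // prints "true"
    }
}
```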


To solve the problem I registered a UUID codec, but that didn't help. I am using spark-cassandra-connector_2.11 version 2.4.0 and the same version of spark-core_2.11. Any suggestions?


My reference is here, but it has no Java UUID example. Your help is appreciated.

Answer


That's a really strange error: this works fine with connector 2.4.0 and Spark 2.2.1, as in the following example.

Table definition:

CREATE TABLE test.utest (
    id int PRIMARY KEY,
    u uuid
);

POJO class:

public class UUIDData {
    private UUID u;
    private int id;
    ...
    // getters/setters
}
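The map() call in the Spark job below constructs UUIDData with two arguments, so the elided part of the class presumably includes both a no-argument constructor (for the row mapper) and an (id, u) constructor. A filled-in sketch, where the constructor argument order is an assumption inferred from that call:

```java
import java.io.Serializable;
import java.util.UUID;

// Filled-in version of the abbreviated POJO above. The (id, u) constructor
// is inferred from the map() call in the Spark job; its argument order is
// an assumption. The no-argument constructor supports reflective mapping.
public class UUIDData implements Serializable {
    private UUID u;
    private int id;

    public UUIDData() {}

    public UUIDData(int id, UUID u) {
        this.id = id;
        this.u = u;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public UUID getU() { return u; }
    public void setU(UUID u) { this.u = u; }
}
```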

Spark job:

public static void main(String[] args) {
    SparkSession spark = SparkSession
            .builder()
            .appName("UUIDTest")
            .getOrCreate();

    CassandraJavaRDD<UUIDData> uuids = javaFunctions(spark.sparkContext())
            .cassandraTable("test", "utest", mapRowTo(UUIDData.class));

    JavaRDD<UUIDData> uuids2 = uuids.map(x -> new UUIDData(x.getId() + 10, x.getU()));

    CassandraJavaUtil.javaFunctions(uuids2)
            .writerBuilder("test", "utest", mapToRow(UUIDData.class))
            .saveToCassandra();
}


I've noticed that in your code you're using the functions mapRowTo and mapToRow without calling .class on the POJO. Are you sure that your code compiled and you aren't running an old version of it?

