How to generate unique id for each record in Spark
Question
I have a huge dataset with MM+ records and I am trying to assign a unique id to each record. I tried the code below, but it takes a lot of time because the row id is sequential. I have tried tweaking memory parameters to optimize the job, but couldn't gain much performance.
Sample code:
JavaRDD<String> rawRdd = ......
rawRdd.zipWithIndex()
      .mapToPair(t -> new Tuple2<Long, String>(t._2, t._1))
Is there a better way to assign unique ids? Thanks.
Answer
Approach 1: if your requirement is just to assign a unique id, you can use a UUID as the unique row id:
rawRdd.mapToPair(t -> new Tuple2<String, String>(t, UUID.randomUUID().toString()));
The only drawback is that each id is 36 characters long.
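To see what this approach produces without spinning up Spark, here is a minimal plain-Java sketch of the same idea: one random (version 4) UUID per record. The record values are made up for illustration.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.UUID;

public class UuidRowId {
    public static void main(String[] args) {
        // Hypothetical records; in Spark these would be the RDD elements.
        List<String> records = List.of("alice", "bob", "carol");

        Set<String> ids = new HashSet<>();
        for (String record : records) {
            // Same call as in the mapToPair above: one random UUID per record.
            String id = UUID.randomUUID().toString();
            ids.add(id);
            System.out.println(id + " -> " + record);
        }

        // A UUID string is always 36 characters: 32 hex digits plus 4 hyphens.
        for (String id : ids) {
            assert id.length() == 36;
        }
        // Collisions are astronomically unlikely, so every record got a distinct id.
        assert ids.size() == records.size();
    }
}
```

Because `UUID.randomUUID()` needs no coordination between executors, this scales without any shared counter, which is exactly why it avoids the sequential bottleneck of `zipWithIndex`.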
Approach 2: create a centralized system to assign unique ids. I use a REST-based API that follows a pattern to generate ids, and each map operation calls the REST service to get a unique id.
The second approach gives you full control over the pattern used for the ids.
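The answer does not specify the REST service or its pattern, so the following is only a hypothetical sketch of the kind of generator such a centralized service might run: a caller-chosen prefix plus a monotonically increasing counter. The class name, prefix, and format are all assumptions for illustration.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical pattern-based id generator, as a centralized service might host it.
// Ids look like "REC-1", "REC-2", ... so the prefix and ordering are under your control.
public class PatternIdGenerator {
    private final String prefix;
    private final AtomicLong counter = new AtomicLong();

    public PatternIdGenerator(String prefix) {
        this.prefix = prefix;
    }

    // Thread-safe: AtomicLong guarantees each caller gets the next number exactly once.
    public String nextId() {
        return prefix + "-" + counter.incrementAndGet();
    }

    public static void main(String[] args) {
        PatternIdGenerator gen = new PatternIdGenerator("REC");
        System.out.println(gen.nextId()); // REC-1
        System.out.println(gen.nextId()); // REC-2
    }
}
```

Note the trade-off: every map call now makes a network round trip to the service, so in practice you would batch id requests (e.g. reserve a range per partition with `mapPartitions`) rather than call the service once per record.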