Does Spark preserve record order when reading in ordered files?


Problem Description

I'm using Spark to read in records (in this case from CSV files) and process them. The files are already in some order, but that order isn't reflected by any column (think of it as a time series, but without a timestamp column: each row just has a relative position within the file). I'd like to use this ordering information in my Spark processing, to do things like comparing a row with the previous row. I can't explicitly order the records, since there is no ordering column.

Does Spark maintain the order of records it reads from a file? Or is there any way to access the file order of records from Spark?

Recommended Answer

Yes, when reading from a file, Spark maintains the order of records. But when a shuffle occurs, that order is not preserved. So to preserve the order, you either need to write your job so that no shuffle occurs, or assign a sequence number to each record and use those sequence numbers during processing.

In a distributed framework like Spark, where data is divided across the cluster for fast processing, shuffling of data is bound to occur. So the best solution is to assign a sequential number to each row and use that number for ordering.
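As a minimal sketch of the sequence-number approach (the path events.csv and all column names here are hypothetical, and it assumes a single input file so that the RDD's partition order matches the file order): read the file as an RDD, tag each line with zipWithIndex, and carry that index forward as an explicit ordering column.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import lag
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("file-order").getOrCreate()

# Hypothetical input path. zipWithIndex numbers RDD elements in partition
# order, which for a single-file read matches the order lines were read,
# so the index can serve as an explicit ordering column from here on.
lines = spark.sparkContext.textFile("events.csv")
indexed = lines.zipWithIndex()  # yields (line, index) pairs

# Build a DataFrame with the sequence number as a real column.
df = indexed.map(lambda p: (p[1], p[0])).toDF(["seq", "line"])

# Once the order lives in a column, shuffles can no longer lose it:
# e.g., compare each row with the previous one by windowing over seq.
w = Window.orderBy("seq")
df = df.withColumn("prev_line", lag("line").over(w))
df.show(truncate=False)
```

Note that a window with orderBy but no partitionBy pulls all rows into a single partition, which is fine for a sketch but should be partitioned by some key for large data.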
