How to force Spark to avoid Dataset re-computation?
Question
I have a Dataset which is loaded from Cassandra in Spark. After loading this Dataset, I will remove some of the items from Cassandra, but I want my Dataset to remain as it was at first for the next computation. I've used persist(DISK_ONLY) to solve it, but it seems to be best effort.
How can I force Spark to avoid re-computation?
Example:
val dataset: Dataset[Int] = ??? // something loaded from Cassandra
dataset.persist(StorageLevel.DISK_ONLY) // best effort only
dataset.count // = 2n
dataset.filter(_ % 2 == 0).remove // pseudocode: delete these rows from Cassandra
dataset.count // = n => I need the original dataset (2n rows) here
Answer
Spark cache is not intended to be used this way. It is an optimization, and even with the most conservative StorageLevels (DISK_ONLY_2), data can be lost and recomputed in case of worker failure or decommissioning.
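For reference, requesting the replicated level mentioned above is a one-line change; even so, Spark treats cached blocks as disposable (a sketch, using the standard Spark API):

import org.apache.spark.storage.StorageLevel

// Two disk replicas per block, yet the blocks can still be lost
// together with the executors holding them, triggering recomputation.
dataset.persist(StorageLevel.DISK_ONLY_2)
dataset.count // materializes the cache (still best effort)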
Checkpointing to a reliable file system might be a better option, but I suspect there might be some border cases which could result in data loss.
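As a sketch of the checkpoint approach, assuming a SparkSession named spark and an illustrative checkpoint directory:

// The checkpoint directory should live on a reliable file system (path is a placeholder).
spark.sparkContext.setCheckpointDir("hdfs:///tmp/spark-checkpoints")

// checkpoint() is eager by default: it materializes the data and truncates the lineage,
// so later actions read the checkpoint files instead of going back to Cassandra.
val checkpointed = dataset.checkpoint()
checkpointed.count // = 2n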
To ensure correctness, I would strongly recommend at least writing intermediate data to persistent storage, like a distributed file system, and reading it back:
dataset.write.format(...).save("persisted/location")
... // Remove data from the source
spark.read.format(...).load("persisted/location") // read the same data back
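For instance, with Parquet as the intermediate format (the format choice and the exact path are placeholders, not from the original answer):

import spark.implicits._ // encoder for .as[Int]

dataset.write.format("parquet").mode("overwrite").save("persisted/location")
// ... remove the rows from Cassandra here ...
val stable = spark.read.format("parquet").load("persisted/location").as[Int]
stable.count // = 2n, now independent of the current Cassandra state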