Efficient string matching in Apache Spark
Problem Description
I use an OCR tool to extract text from screenshots (roughly 1-5 sentences each). However, when manually verifying the extracted text, I noticed several errors that occur from time to time.
Considering the text "Hello there 😊! I really like Spark ❤️!", I noticed that:
1) The characters "i", "!" and "l" get replaced by "|".
2) Emojis are not extracted correctly and are either replaced by other characters or left out.
3) White spaces are removed from time to time.
As a result, I might end up with a string like: "Hello here 7l|Real|y Like Spark!"
Since I am trying to match these strings against a dataset containing the correct text (in this case "Hello there 😊! I really like Spark ❤️!"), I am looking for an efficient way to match strings in Spark.
Can anyone suggest an efficient algorithm for Spark that lets me compare the extracted texts (~100,000) with my dataset (~100 million)?
Recommended Answer
I wouldn't use Spark in the first place, but if you are really committed to the particular stack, you can combine a bunch of ML transformers to get the best matches. You will need a Tokenizer (or split):
import org.apache.spark.ml.feature.RegexTokenizer
val tokenizer = new RegexTokenizer()
  .setPattern("")
  .setInputCol("text")
  .setMinTokenLength(1)
  .setOutputCol("tokens")
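With an empty pattern the tokenizer splits each string into single characters. A quick sanity check (a sketch that, like the fit/transform examples below, assumes a SparkSession with spark.implicits._ in scope):
// RegexTokenizer lowercases by default, so "Spark" yields tokens [s, p, a, r, k].
tokenizer.transform(Seq("Spark").toDF("text")).select("tokens").show(false)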
an NGram (for example a 3-gram):
import org.apache.spark.ml.feature.NGram
val ngram = new NGram().setN(3).setInputCol("tokens").setOutputCol("ngrams")
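On character tokens this produces overlapping character trigrams, joined by spaces. For illustration (a sketch reusing the stages defined above, not part of the original answer):
// tokens [s, p, a, r, k] become trigrams ["s p a", "p a r", "a r k"]
ngram.transform(tokenizer.transform(Seq("Spark").toDF("text")))
  .select("ngrams").show(false)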
a Vectorizer (for example CountVectorizer or HashingTF):
import org.apache.spark.ml.feature.HashingTF
val vectorizer = new HashingTF().setInputCol("ngrams").setOutputCol("vectors")
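MinHash treats any non-zero entry as set membership, so raw n-gram counts carry no extra weight. If you prefer explicitly binary vectors, HashingTF supports that (an optional tweak, not part of the original answer):
// Optional: emit 0/1 indicator vectors instead of raw n-gram counts.
val binaryVectorizer = new HashingTF()
  .setInputCol("ngrams")
  .setOutputCol("vectors")
  .setBinary(true)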
and LSH:
import org.apache.spark.ml.feature.{MinHashLSH, MinHashLSHModel}
// Increase numHashTables in practice.
val lsh = new MinHashLSH().setInputCol("vectors").setOutputCol("lsh")
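As the comment notes, a single hash table is rarely enough in practice; more tables lower the false-negative rate at the cost of extra computation. For example (5 here is an illustrative value, not a tuned setting from the original answer):
val lshTuned = new MinHashLSH()
  .setNumHashTables(5)
  .setInputCol("vectors")
  .setOutputCol("lsh")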
Combine everything with a Pipeline:
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))
Fit on example data:
val query = Seq("Hello there 7l | real|y like Spark!").toDF("text")
val db = Seq(
  "Hello there 😊! I really like Spark ❤️!",
  "Can anyone suggest an efficient algorithm"
).toDF("text")
val model = pipeline.fit(db)
Transform both:
val dbHashed = model.transform(db)
val queryHashed = model.transform(query)
and join:
model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75).show
+--------------------+--------------------+------------------+
| datasetA| datasetB| distCol|
+--------------------+--------------------+------------------+
|[Hello there 😊! ...|[Hello there 7l |...|0.5106382978723405|
+--------------------+--------------------+------------------+
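Here distCol is the approximate Jaccard distance between the n-gram sets, and 0.75 is the maximum distance the join reports. To keep only the closest database row for each extracted string, you could aggregate after the join (a sketch, not part of the original answer; it assumes the datasetA/datasetB/distCol columns shown above):
import org.apache.spark.sql.functions.{col, min}

val joined = model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75)

// Keep the best (lowest-distance) match per query string.
joined
  .groupBy(col("datasetB.text").alias("query"))
  .agg(min(col("distCol")).alias("bestDist"))
  .show()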
The same approach can be used in PySpark:
from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH
query = spark.createDataFrame(
    ["Hello there 7l | real|y like Spark!"], "string"
).toDF("text")

db = spark.createDataFrame([
    "Hello there 😊! I really like Spark ❤️!",
    "Can anyone suggest an efficient algorithm"
], "string").toDF("text")

model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)

db_hashed = model.transform(db)
query_hashed = model.transform(query)

model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()
# +--------------------+--------------------+------------------+
# | datasetA| datasetB| distCol|
# +--------------------+--------------------+------------------+
# |[Hello there 😊! ...|[Hello there 7l |...|0.5106382978723405|
# +--------------------+--------------------+------------------+