How to use a broadcast collection in a udf?


Question


How can a broadcast collection be used in a Spark SQL 1.6.1 UDF? The UDF should be called from the main SQL as shown below:

sqlContext.sql("""Select col1,col2,udf_1(key) as value_from_udf FROM table_a""")


udf_1() should look up each key in a small broadcast collection and return the matching value to the main SQL.

Answer


Here's a minimal reproducible example in pySpark, illustrating the use of a broadcast variable to perform lookups, with a lambda function registered as a UDF and called inside a SQL statement.

# Create dummy data and register as table
df = sc.parallelize([
    (1,"a"),
    (2,"b"),
    (3,"c")]).toDF(["num","let"])
df.registerTempTable('table')

# Create broadcast variable from local dictionary
myDict = {1: "y", 2: "x", 3: "z"}
broadcastVar = sc.broadcast(myDict) 
# Alternatively, if your dict is a key-value rdd, 
# you can do sc.broadcast(rddDict.collectAsMap())

# Create lookup function and apply it
sqlContext.registerFunction("lookup", lambda x: broadcastVar.value.get(x))
sqlContext.sql('select num, let, lookup(num) as test from table').show()
+---+---+----+
|num|let|test|
+---+---+----+
|  1|  a|   y|
|  2|  b|   x|
|  3|  c|   z|
+---+---+----+
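Spark aside, the registered lambda is just an ordinary Python function applied once per row; the broadcast mechanism only controls how the dictionary is shipped to the executors. Stripped of Spark, the per-row lookup behaves like this minimal sketch (the `lookup` helper and its `default` parameter are illustrative, not part of the answer's code):

```python
# The same dictionary that was broadcast in the example above
myDict = {1: "y", 2: "x", 3: "z"}

def lookup(x, default=None):
    # dict.get returns the default for unmatched keys; with the default
    # left as None, a missing key surfaces as NULL in the SQL result,
    # which is what broadcastVar.value.get(x) does in the UDF above.
    return myDict.get(x, default)
```

Because `dict.get` never raises `KeyError`, the UDF is safe to call on keys that are absent from the broadcast dictionary; pass an explicit `default` if NULLs are undesirable.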
