PySpark: match the values of a DataFrame column against another DataFrame column
Question
In a Pandas DataFrame, I can use the DataFrame.isin() function to match the values of a column against another column.

For example, suppose we have one DataFrame:
df_A = pd.DataFrame({'col1': ['A', 'B', 'C', 'B', 'C', 'D'],
                     'col2': [1, 2, 3, 4, 5, 6]})
df_A

  col1  col2
0    A     1
1    B     2
2    C     3
3    B     4
4    C     5
5    D     6
and another DataFrame:
df_B = pd.DataFrame({'col1': ['C', 'E', 'D', 'C', 'F', 'G', 'H'],
                     'col2': [10, 20, 30, 40, 50, 60, 70]})
df_B

  col1  col2
0    C    10
1    E    20
2    D    30
3    C    40
4    F    50
5    G    60
6    H    70
I can use the .isin() function to match the column values of df_B against the column values of df_A. For example:
df_B[df_B['col1'].isin(df_A['col1'])]
which yields:

  col1  col2
0    C    10
2    D    30
3    C    40
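For reference, the Pandas behaviour above can be reproduced end to end (names follow the question's df_A/df_B):

```python
import pandas as pd

df_A = pd.DataFrame({'col1': ['A', 'B', 'C', 'B', 'C', 'D'],
                     'col2': [1, 2, 3, 4, 5, 6]})
df_B = pd.DataFrame({'col1': ['C', 'E', 'D', 'C', 'F', 'G', 'H'],
                     'col2': [10, 20, 30, 40, 50, 60, 70]})

# isin builds a boolean mask: True wherever df_B's col1 value
# also appears somewhere in df_A's col1
matched = df_B[df_B['col1'].isin(df_A['col1'])]
print(matched)
```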
What is the equivalent operation in a PySpark DataFrame?
df_A = pd.DataFrame({'col1': ['A', 'B', 'C', 'B', 'C', 'D'],
                     'col2': [1, 2, 3, 4, 5, 6]})
df_A = sqlContext.createDataFrame(df_A)

df_B = pd.DataFrame({'col1': ['C', 'E', 'D', 'C', 'F', 'G', 'H'],
                     'col2': [10, 20, 30, 40, 50, 60, 70]})
df_B = sqlContext.createDataFrame(df_B)

df_B[df_B['col1'].isin(df_A['col1'])]
The .isin() code above gives me an error message:
u'resolved attribute(s) col1#9007 missing from
col1#9012,col2#9013L in operator !Filter col1#9012 IN
(col1#9007);;\n!Filter col1#9012 IN (col1#9007)\n+-
LogicalRDD [col1#9012, col2#9013L]\n'
Answer
This kind of operation is called a "left semi join" in Spark:
df_B.join(df_A, ['col1'], 'leftsemi')