Splitting a row into multiple rows in spark-shell
Problem description
I have imported data into a Spark DataFrame in spark-shell. The data looks like:
Col1 | Col2 | Col3 | Col4
A1 | 11 | B2 | a|b;1;0xFFFFFF
A1 | 12 | B1 | 2
A2 | 12 | B2 | 0xFFF45B
Here in Col4 the values are of different kinds, and I want to separate them (say "a|b" is of type alphabets, "1" or "2" is of type digits, and "0xFFFFFF" or "0xFFF45B" is of type hexadecimal number):
So the output should be:
Col1 | Col2 | Col3 | alphabets | digits | hexadecimal
A1 | 11 | B2 | a | 1 | 0xFFFFFF
A1 | 11 | B2 | b | 1 | 0xFFFFFF
A1 | 12 | B1 | | 2 |
A2 | 12 | B2 | | | 0xFFF45B
I hope I've made my query clear to you; note that I am using spark-shell. Thanks in advance.
Recommended answer
Edit, after getting this answer about how to make a backreference in regexp_replace:
You can use regexp_replace with a backreference, then split twice and explode. It is, imo, cleaner than my original solution:
import org.apache.spark.sql.functions._
import spark.implicits._

val df = List(
  ("A1", "11", "B2", "a|b;1;0xFFFFFF"),
  ("A1", "12", "B1", "2"),
  ("A2", "12", "B2", "0xFFF45B")
).toDF("Col1", "Col2", "Col3", "Col4")

// Three optional groups: alphabets, digits, hexadecimal.
// Note [A-Za-z|] rather than [A-z|]: the latter also matches [ \ ] ^ _ and `
val regExStr = "^([A-Za-z|]+)?;?(\\d+)?;?(0x.*)?$"

val res = df
  .withColumn("backrefReplace",
    split(regexp_replace('Col4, regExStr, "$1;$2;$3"), ";"))
  .select('Col1, 'Col2, 'Col3,
    explode(split('backrefReplace(0), "\\|")).as("letter"),
    'backrefReplace(1).as("digits"),
    'backrefReplace(2).as("hexadecimal")
  )
+----+----+----+------+------+-----------+
|Col1|Col2|Col3|letter|digits|hexadecimal|
+----+----+----+------+------+-----------+
| A1| 11| B2| a| 1| 0xFFFFFF|
| A1| 11| B2| b| 1| 0xFFFFFF|
| A1| 12| B1| | 2| |
| A2| 12| B2| | | 0xFFF45B|
+----+----+----+------+------+-----------+
You still need to replace the empty strings by null, though...
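As a sketch of that cleanup (assuming the `res` DataFrame built above), one way is to map the empty strings back to null with `when`/`otherwise` over the three extracted columns:

```scala
import org.apache.spark.sql.functions.{when, col}

// Hypothetical cleanup step: turn "" back into null in the extracted columns.
val cleaned = Seq("letter", "digits", "hexadecimal").foldLeft(res) { (acc, c) =>
  acc.withColumn(c, when(col(c) === "", null).otherwise(col(c)))
}
```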
Here is a solution that sticks to DataFrames but is also quite messy. You can first use regexp_extract three times (is it possible to do less with a backreference?), and finally split on "|" and explode. Note that you need a coalesce for explode to return everything (in this solution you still might want to change the empty strings in letter to null).
val res = df
  .withColumn("alphabets",   regexp_extract('Col4, "^([A-Za-z|]+)?", 1))
  .withColumn("digits",      regexp_extract('Col4, "^([A-Za-z|]+)?;?(\\d+)?;?(0x.*)?$", 2))
  .withColumn("hexadecimal", regexp_extract('Col4, "^([A-Za-z|]+)?;?(\\d+)?;?(0x.*)?$", 3))
  .withColumn("letter",
    explode(
      split(
        coalesce('alphabets, lit("")), // guard against null before split/explode
        "\\|"
      )
    )
  )

res.show
+----+----+----+--------------+---------+------+-----------+------+
|Col1|Col2|Col3| Col4|alphabets|digits|hexadecimal|letter|
+----+----+----+--------------+---------+------+-----------+------+
| A1| 11| B2|a|b;1;0xFFFFFF| a|b| 1| 0xFFFFFF| a|
| A1| 11| B2|a|b;1;0xFFFFFF| a|b| 1| 0xFFFFFF| b|
| A1| 12| B1| 2| null| 2| null| |
| A2| 12| B2| 0xFFF45B| null| null| 0xFFF45B| |
+----+----+----+--------------+---------+------+-----------+------+
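To end up with exactly the columns asked for in the question, a final projection (a sketch, assuming the `res` DataFrame above) could drop the intermediate columns and rename `letter`:

```scala
// Sketch: keep only the requested columns, renaming "letter" to "alphabets"
val out = res
  .drop("Col4", "alphabets")
  .withColumnRenamed("letter", "alphabets")
  .select("Col1", "Col2", "Col3", "alphabets", "digits", "hexadecimal")
```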
Note: the regexp part could be so much better with a backreference, so if somebody knows how to do it, please comment!