SparklyR separate one Spark DataFrame column into two columns
Question
I have a dataframe containing a column named COL which is structured in this way:
VALUE1###VALUE2
The following code works:
library(sparklyr)
library(tidyr)
library(dplyr)
mParams <- collect(filter(input_DF, TYPE == 'MIN'))
mParams <- separate(mParams, COL, c('col1', 'col2'), '\\###', remove = FALSE)
If I remove the collect, I get this error:
Error in UseMethod("separate_") :
no applicable method for 'separate_' applied to an object of class "c('tbl_spark', 'tbl_sql', 'tbl_lazy', 'tbl')"
Is there any alternative that achieves what I want, but without collecting everything on my Spark driver?
Sparklyr version 0.5 has just been released, and it contains the ft_regex_tokenizer() function, which can do that:
A regex based tokenizer that extracts tokens either by using the provided regex pattern to split the text (default) or repeatedly matching the regex (if gaps is false).
library(dplyr)
library(sparklyr)
ft_regex_tokenizer(input_DF, input.col = "COL", output.col = "ResultCols", pattern = '\\###')
The split column "ResultCols" will be a list column.
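To end up with two scalar columns on the Spark side rather than a list column, the tokenizer output can be split with sdf_separate_column(), which was added in later sparklyr releases (note that newer sparklyr versions also renamed the arguments to input_col/output_col). A minimal sketch, assuming a local Spark connection and sample data mirroring the question's VALUE1###VALUE2 layout (input_DF and the column names are taken from the question; the sample values are made up):

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Hypothetical sample data in the question's "VALUE1###VALUE2" shape
input_DF <- copy_to(sc, data.frame(COL = c("value1###value2",
                                           "value3###value4")), "input_df")

result <- input_DF %>%
  # Split COL on the regex "###" into a list column "ResultCols"
  ft_regex_tokenizer(input_col = "COL", output_col = "ResultCols",
                     pattern = "###") %>%
  # Expand the list column into two ordinary columns, still on the cluster
  sdf_separate_column("ResultCols", into = c("col1", "col2"))
```

Everything here runs lazily on Spark; nothing is collected to the driver until you explicitly call collect() on the result.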