Google Cloud Dataflow Write to CSV from dictionary

Question

I have a dictionary of values that I would like to write to GCS as a valid .CSV file using the Python SDK. I can write the dictionary out as a newline-separated text file, but I can't seem to find an example of converting the dictionary to a valid .CSV. Can anybody suggest the best way to generate CSVs within a Dataflow pipeline? The answers to this question address reading from CSV files, but don't really address writing to CSV files. I recognize that CSV files are just text files with rules, but I'm still struggling to convert the dictionary of data to a CSV that can be written using WriteToText.

Here is a simple example dictionary that I would like to turn into a CSV:

test_input = [{'label': 1, 'text': 'Here is a sentence'},
              {'label': 2, 'text': 'Another sentence goes here'}]


test_input  | beam.io.WriteToText(path_to_gcs)

The above results in a text file with each dictionary on its own line. Is there any functionality within Apache Beam that I can take advantage of (similar to csv.DictWriter)?

Answer

Generally, you will want to write a function that converts your original dict data elements into a CSV-formatted string representation.

That function can be written as a DoFn and applied to your Beam PCollection of data via ParDo, which converts each collection element into the desired format. You can also wrap this DoFn in a more user-friendly PTransform.
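
As a rough sketch of that pattern (the DoFn name and the fixed column order here are placeholders for illustration, not part of the original answer), such a DoFn could yield one CSV-formatted line per input dictionary:

import apache_beam as beam

class _DictToCsvLineFn(beam.DoFn):
    """ Yields one comma-separated line per input dictionary.
        Note: plain string joining does not quote fields that contain commas.
    """
    def process(self, element):
        yield '%s,%s' % (element['label'], element['text'])

# Applied to a PCollection via ParDo, then written out:
#     pcoll | beam.ParDo(_DictToCsvLineFn()) | beam.io.WriteToText(path_to_gcs)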

You can learn more about this process in the Beam Programming Guide.

Here is a simple, translatable non-Beam example:

# Our example list of dictionary elements
test_input = [{'label': 1, 'text': 'Here is a sentence'},
              {'label': 2, 'text': 'Another sentence goes here'}]

def convert_my_dict_to_csv_record(input_dict):
    """ Turns dictionary values into a comma-separated value formatted string """
    return ','.join(map(str, input_dict.values()))

# Our converted list of elements
converted_test_input = [convert_my_dict_to_csv_record(element) for element in test_input]

converted_test_input looks like this:

['Here is a sentence,1', 'Another sentence goes here,2']
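
As a minimal sketch of wiring that function into a pipeline (output_prefix is a placeholder path, not from the original answer), the conversion can be applied with beam.Map and the resulting strings written out with WriteToText; keep in mind that plain ','.join does not quote fields that themselves contain commas, which the DictWriter-based version below handles:

import apache_beam as beam

output_prefix = 'gs://your-bucket/path/output'  # placeholder GCS path

with beam.Pipeline() as p:
    (p
     | beam.Create(test_input)                    # PCollection of dicts
     | beam.Map(convert_my_dict_to_csv_record)    # one CSV line per dict
     | beam.io.WriteToText(output_prefix, file_name_suffix='.csv'))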

Using DictWriter

from csv import DictWriter
from csv import excel
from cStringIO import StringIO  # Python 2 only; on Python 3 use io.StringIO

from apache_beam import DoFn
from apache_beam import ParDo
from apache_beam import PTransform

...

def _dict_to_csv(element, column_order, missing_val='', discard_extras=True, dialect=excel):
    """ Additional properties for delimiters, escape chars, etc via an instance of csv.Dialect
        Note: This implementation does not support unicode
    """

    buf = StringIO()

    writer = DictWriter(buf,
                        fieldnames=column_order,
                        restval=missing_val,
                        extrasaction=('ignore' if discard_extras else 'raise'),
                        dialect=dialect)
    writer.writerow(element)

    return buf.getvalue().rstrip(dialect.lineterminator)


class _DictToCSVFn(DoFn):
    """ Converts a Dictionary to a CSV-formatted String

        column_order: A tuple or list specifying the name of fields to be formatted as csv, in order
        missing_val: The value to be written when a named field from `column_order` is not found in the input element
        discard_extras: (bool) Behavior when additional fields are found in the dictionary input element
        dialect: Delimiters, escape-characters, etc can be controlled by providing an instance of csv.Dialect

    """

    def __init__(self, column_order, missing_val='', discard_extras=True, dialect=excel):
        self._column_order = column_order
        self._missing_val = missing_val
        self._discard_extras = discard_extras
        self._dialect = dialect

    def process(self, element, *args, **kwargs):
        result = _dict_to_csv(element,
                              column_order=self._column_order,
                              missing_val=self._missing_val,
                              discard_extras=self._discard_extras,
                              dialect=self._dialect)

        return [result,]

class DictToCSV(PTransform):
    """ Transforms a PCollection of Dictionaries to a PCollection of CSV-formatted Strings

        column_order: A tuple or list specifying the name of fields to be formatted as csv, in order
        missing_val: The value to be written when a named field from `column_order` is not found in an input element
        discard_extras: (bool) Behavior when additional fields are found in the dictionary input element
        dialect: Delimiters, escape-characters, etc can be controlled by providing an instance of csv.Dialect

    """

    def __init__(self, column_order, missing_val='', discard_extras=True, dialect=excel):
        self._column_order = column_order
        self._missing_val = missing_val
        self._discard_extras = discard_extras
        self._dialect = dialect

    def expand(self, pcoll):
        return pcoll | ParDo(_DictToCSVFn(column_order=self._column_order,
                                          missing_val=self._missing_val,
                                          discard_extras=self._discard_extras,
                                          dialect=self._dialect)
                             )

To use the example, you would put your test_input into a PCollection and apply the DictToCSV PTransform to it; the resulting converted PCollection can then be used as input for WriteToText. Note that you must provide a list or tuple of column names, via the column_order argument, corresponding to keys of your dictionary input elements; the columns of the resulting CSV-formatted strings will be in the order of the column names provided. Also, the underlying implementation for the example does not support unicode.
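
For illustration, a pipeline using this transform might look roughly like the following (output_prefix is a placeholder path, not from the original answer; the optional header argument of WriteToText simply prepends a header row to each output shard):

import apache_beam as beam

output_prefix = 'gs://your-bucket/path/output'  # placeholder GCS path

with beam.Pipeline() as p:
    (p
     | beam.Create(test_input)
     | DictToCSV(column_order=['label', 'text'])
     | beam.io.WriteToText(output_prefix,
                           file_name_suffix='.csv',
                           header='label,text'))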
