How to read BigQuery table using python pipeline code in GCP Dataflow
Problem description
Could someone please share the syntax to read from / write to a BigQuery table in a pipeline written in Python for GCP Dataflow?
Answer
Run on Dataflow

First, construct a Pipeline with the following options for it to run on GCP Dataflow:
import apache_beam as beam

# Pipeline options for running on Dataflow; <project>, <region> and
# <setup.py file> are placeholders for your own values.
options = {'project': <project>,
           'runner': 'DataflowRunner',
           'region': <region>,
           'setup_file': <setup.py file>}

pipeline_options = beam.pipeline.PipelineOptions(flags=[], **options)
pipeline = beam.Pipeline(options=pipeline_options)
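When running on Dataflow you will typically also need to point the job at a Cloud Storage location for staging and temporary files. A minimal sketch, assuming you have a bucket at gs://<bucket> (the bucket path is an assumption, not part of the original answer):

options = {'project': <project>,
           'runner': 'DataflowRunner',
           'region': <region>,
           'setup_file': <setup.py file>,
           'staging_location': 'gs://<bucket>/staging',  # assumed bucket path
           'temp_location': 'gs://<bucket>/temp'}        # assumed bucket path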
Read from BigQuery

Define a BigQuerySource with your query and use beam.io.Read to read data from BQ:
BQ_source = beam.io.BigQuerySource(query=<query>)
BQ_data = pipeline | beam.io.Read(BQ_source)
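Each element of BQ_data is a Python dictionary keyed by column name, so it can be processed with ordinary Beam transforms. A small illustrative sketch, where the column name 'name' is only an assumed example:

# Extract a single (assumed) column from every row read from BigQuery
names = BQ_data | beam.Map(lambda row: row['name'])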
Write to BigQuery

There are two options to write to BigQuery:
1. Use a BigQuerySink with beam.io.Write:
BQ_sink = beam.io.BigQuerySink(<table>, dataset=<dataset>, project=<project>)
BQ_data | beam.io.Write(BQ_sink)
2. Use beam.io.WriteToBigQuery:
BQ_data | beam.io.WriteToBigQuery(<table>, dataset=<dataset>, project=<project>)
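If the destination table may not exist yet, WriteToBigQuery also accepts a schema and create/write dispositions. A sketch under that assumption, where the column definitions are only an illustrative example:

BQ_data | beam.io.WriteToBigQuery(
    <table>, dataset=<dataset>, project=<project>,
    schema='name:STRING,value:INTEGER',  # assumed example schema
    create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)

Finally, remember to run the pipeline, e.g. pipeline.run().wait_until_finish().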