bigquery DataFlow Error: Cannot read and write in different locations while reading and writing in EU

Question

I have a simple Google DataFlow task. It reads from a BigQuery table and writes into another, just like this:

(p
 | beam.io.Read(beam.io.BigQuerySource(
       query='select dia, import from DS1.t_27k where true',
       use_standard_sql=True))
 | beam.io.Write(beam.io.BigQuerySink(
       output_table,
       dataset='DS1',
       project=project,
       schema='dia:DATE, import:FLOAT',
       create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
       write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE)))

I guess the issue is that this pipeline needs a temporary dataset to do its work, and I'm not able to force a location for that temp dataset. Because my DS1 is in the EU (#EUROPE-WEST1) and the temporary dataset is (I guess) in the US, the task fails:

WARNING:root:Dataset m-h-0000:temp_dataset_e433a0ef19e64100000000000001a does not exist so we will create it as temporary with location=None
WARNING:root:A task failed with exception.
 HttpError accessing <https://www.googleapis.com/bigquery/v2/projects/m-h-000000/queries/b8b2f00000000000000002bed336369d?alt=json&maxResults=10000>: response: <{'status': '400', 'content-length': '292', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'transfer-encoding': 'chunked', 'expires': 'Sat, 14 Oct 2017 20:29:15 GMT', 'vary': 'Origin, X-Origin', 'server': 'GSE', '-content-encoding': 'gzip', 'cache-control': 'private, max-age=0', 'date': 'Sat, 14 Oct 2017 20:29:15 GMT', 'x-frame-options': 'SAMEORIGIN', 'alt-svc': 'quic=":443"; ma=2592000; v="39,38,37,35"', 'content-type': 'application/json; charset=UTF-8'}>, content <{
 "error": {
  "errors": [
   {
    "domain": "global",
    "reason": "invalid",
    "message": "Cannot read and write in different locations: source: EU, destination: US"
   }
  ],
  "code": 400,
  "message": "Cannot read and write in different locations: source: EU, destination: US"
 }
}

Pipeline options:

import apache_beam as beam
from apache_beam.options.pipeline_options import (
    GoogleCloudOptions,
    PipelineOptions,
    StandardOptions,
)

options = PipelineOptions()

google_cloud_options = options.view_as(GoogleCloudOptions)
google_cloud_options.project = 'm-h'
google_cloud_options.job_name = 'myjob3'
google_cloud_options.staging_location = 'gs://p_df/staging'  # EUROPE-WEST1
google_cloud_options.region = 'europe-west1'
google_cloud_options.temp_location = 'gs://p_df/temp'  # EUROPE-WEST1
options.view_as(StandardOptions).runner = 'DirectRunner'  # 'DataflowRunner'

p = beam.Pipeline(options=options)

How can I avoid this error?

Note that the error only appears when I run it with the DirectRunner.

Answer

The BigQuerySource transform used in the Python DirectRunner doesn't automatically determine the locations for temp tables. See BEAM-1909 for the issue.
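
A possible workaround for the DirectRunner (a sketch, not part of the original answer): run the query yourself with the google-cloud-bigquery client, which lets you pin the job location, materialize the result into a staging table inside DS1 (in the EU), and have the pipeline read that table instead of a query. The staging table name t_27k_staged is hypothetical.

from google.cloud import bigquery

client = bigquery.Client(project='m-h')

# Materialize the query result into a hypothetical staging table in DS1 (EU).
job_config = bigquery.QueryJobConfig(
    destination=bigquery.TableReference.from_string('m-h.DS1.t_27k_staged'),
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE)

# location='EU' keeps the query job in the same location as DS1, so source
# and destination no longer disagree.
client.query(
    'select dia, import from DS1.t_27k where true',
    job_config=job_config,
    location='EU').result()  # wait for the query job to finish

The pipeline can then read with beam.io.BigQuerySource(table='m-h:DS1.t_27k_staged'), which should sidestep the auto-created temp dataset entirely.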

When using the DataflowRunner, this should work.
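
For completeness, a minimal sketch of switching the pipeline above to the DataflowRunner, reusing the same options object from the question:

# Select the DataflowRunner instead of the DirectRunner; per the answer
# above, it handles the temp dataset location for BigQuery queries.
options.view_as(StandardOptions).runner = 'DataflowRunner'

p = beam.Pipeline(options=options)
# ... same BigQuery read and write steps as in the question ...
p.run().wait_until_finish()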
