Why does my Dataflow output "timeout value is negative" on insertion to BigQuery?


Problem description

I have a Dataflow job consisting of ReadSource, ParDo, Windowing, Insert (into a date-partitioned table in BigQuery).

Basically it is:

  1. Read text files from a Google Storage bucket using a glob
  2. Process each row by splitting on a delimiter, changing some values before giving every column a name and a data type, and then output it as a BigQuery table row together with a timestamp derived from the data
  3. Window into daily windows using the timestamp from step 2
  4. Write to BigQuery, using window tables and the "dataset$datepartition" syntax to specify table and partition, with the create disposition set to CREATE_IF_NEEDED and the write disposition set to WRITE_APPEND (a rough sketch of such a pipeline is shown after this list)
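
Below is a minimal sketch of such a pipeline with the Dataflow Java SDK 1.x, assuming the SerializableFunction&lt;BoundedWindow, String&gt; overload of BigQueryIO.Write.to for per-window tables; the bucket path, delimiter, column names, schema and project/dataset/table names are hypothetical placeholders, not the asker's actual values:

import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.BigQueryIO;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import com.google.cloud.dataflow.sdk.transforms.SerializableFunction;
import com.google.cloud.dataflow.sdk.transforms.windowing.BoundedWindow;
import com.google.cloud.dataflow.sdk.transforms.windowing.FixedWindows;
import com.google.cloud.dataflow.sdk.transforms.windowing.IntervalWindow;
import com.google.cloud.dataflow.sdk.transforms.windowing.Window;
import org.joda.time.Duration;
import org.joda.time.Instant;
import org.joda.time.format.ISODateTimeFormat;
import java.util.Arrays;

public class PartitionedInsertPipeline {
  public static void main(String[] args) {
    // Hypothetical schema matching the columns produced in step 2.
    TableSchema schema = new TableSchema().setFields(Arrays.asList(
        new TableFieldSchema().setName("user_id").setType("STRING"),
        new TableFieldSchema().setName("amount").setType("FLOAT"),
        new TableFieldSchema().setName("event_time").setType("TIMESTAMP")));

    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(TextIO.Read.from("gs://my-bucket/input/*.txt"))                     // 1. read with a glob
     .apply(ParDo.of(new DoFn<String, TableRow>() {                             // 2. split, rename, type, timestamp
       @Override
       public void processElement(ProcessContext c) {
         String[] f = c.element().split(";");
         TableRow row = new TableRow()
             .set("user_id", f[0])
             .set("amount", Double.parseDouble(f[1]))
             .set("event_time", f[2]);
         // Emit with an event-time timestamp taken from the data itself.
         c.outputWithTimestamp(row, Instant.parse(f[2]));
       }
     }))
     .apply(Window.<TableRow>into(FixedWindows.of(Duration.standardDays(1))))   // 3. daily windows
     .apply(BigQueryIO.Write                                                    // 4. one partition per window
         .to(new SerializableFunction<BoundedWindow, String>() {
           @Override
           public String apply(BoundedWindow window) {
             // "dataset$datepartition" decorator built from the window start, e.g. ...events$20160815
             String day = ISODateTimeFormat.basicDate()
                 .print(((IntervalWindow) window).start());
             return "my-project:my_dataset.events$" + day;
           }
         })
         .withSchema(schema)
         .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
         .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    p.run();
  }
}

Note that writing to per-window tables in a batch pipeline appears to go through the streaming-insert path (the BatchBigQueryIOWrite/StreamWithDeDup steps visible in the failure message further down), which is where the exception below is thrown.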

The first three steps seem to run fine, but in most cases the job runs into a problem in the last insert step, which produces the following exception in the log:

java.lang.IllegalArgumentException: timeout value is negative at java.lang.Thread.sleep(Native Method) 
at com.google.cloud.dataflow.sdk.util.BigQueryTableInserter.insertAll(BigQueryTableInserter.java:287) 
at com.google.cloud.dataflow.sdk.io.BigQueryIO$StreamingWriteFn.flushRows(BigQueryIO.java:2446) 
at com.google.cloud.dataflow.sdk.io.BigQueryIO$StreamingWriteFn.finishBundle(BigQueryIO.java:2404) 
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.finishBundle(DoFnRunnerBase.java:158) 
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.finishBundle(SimpleParDoFn.java:196) 
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.finishBundle(ForwardingParDoFn.java:47) 
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.finish(ParDoOperation.java:65) 
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:80) 
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:287) 
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:223) 
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:173) 
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:193) 
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:173) 
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:160) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

This exception is repeated ten times.
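
The IllegalArgumentException itself comes from java.lang.Thread.sleep, which rejects negative arguments. A minimal, purely illustrative reproduction of the same message (unrelated to the SDK code, just to show what it means):

public class NegativeSleepDemo {
  public static void main(String[] args) throws InterruptedException {
    long pause = -1L;     // stands in for whatever negative pause the SDK's insert retry computed (hypothetical)
    Thread.sleep(pause);  // throws java.lang.IllegalArgumentException: timeout value is negative
  }
}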

Finally I get "Workflow failed" as below:

Workflow failed. Causes: S04:Insert/DataflowPipelineRunner.BatchBigQueryIOWrite/BigQueryIO.StreamWithDeDup/Reshuffle/ 
GroupByKey/Read+Insert/DataflowPipelineRunner.BatchBigQueryIOWrite/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey/
GroupByWindow+Insert/DataflowPipelineRunner.BatchBigQueryIOWrite/BigQueryIO.StreamWithDeDup/Reshuffle/
ExpandIterable+Insert/DataflowPipelineRunner.BatchBigQueryIOWrite/BigQueryIO.StreamWithDeDup/ParDo(StreamingWrite)
 failed.

Sometimes the same job with the same input works without any problem, though, which makes this quite hard to debug. So where should I start?

Answer

This is a known issue with the BigQueryIO streaming write operation in the Dataflow SDK for Java 1.7.0. It is fixed in GitHub HEAD, and the fix will be included in the 1.8.0 release of the Dataflow Java SDK.

For more details, see Issue #451 on the DataflowJavaSDK GitHub repository.

