How do you access the message id from Google Pub/Sub using Apache Beam?


Problem Description

I have been testing Apache Beam using the 2.13.0 SDK on Python 2.7.16, pulling simple messages from a Google Pub/Sub subscription in streaming mode and writing them to a Google BigQuery table. As part of this operation, I'm trying to use the Pub/Sub message ID for deduplication, but I can't seem to get at it at all.


Code to send a message:-

import json
from datetime import datetime

from google.cloud import pubsub_v1

project_id = "[YOUR PROJECT]"
topic_name = "test-apache-beam"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_name)

def callback(message_future):
    if message_future.exception(timeout=30):
        print('Publishing message {} threw an Exception {}.'.format(topic_name, message_future.exception()))
    else:
        print(message_future.result())

for n in range(1,11):
    data = {'rownumber':n}
    jsondata = json.dumps(data)
    message_future = publisher.publish(topic_path, data=jsondata, source='python', timestamp=datetime.now().strftime("%Y-%b-%d (%H:%M:%S:%f)"))
    message_future.add_done_callback(callback)

print('Published message IDs:')
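
(For reference, the server-assigned message ID is visible to the publisher: the future returned by publish() resolves to the message ID string, which is what the callback above prints.)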

The Beam pipeline code:-

from __future__ import absolute_import

import argparse
import logging
import json

import apache_beam as beam
from apache_beam.io import ReadFromPubSub
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import StandardOptions

def format_message_element(message, timestamp=beam.DoFn.TimestampParam):
    """Converts a PubsubMessage into a dict matching the BigQuery schema."""

    # message.data is the JSON payload; message.attributes holds only the
    # user-supplied attributes (no service-generated message ID appears).
    data = json.loads(message.data)
    attribs = message.attributes

    fullmessage = {'data': data,
                   'attributes': attribs,
                   'attribstring': str(message.attributes)}

    return fullmessage

def run(argv=None):

    parser = argparse.ArgumentParser()
    input_group = parser.add_mutually_exclusive_group(required=True)
    input_group.add_argument(
                        '--input_subscription',
                        dest='input_subscription',
                        help=('Input PubSub subscription of the form '
                        '"projects/<PROJECT>/subscriptions/<SUBSCRIPTION>."'))
    input_group.add_argument(
                        '--test_input',
                        action="store_true",
                        default=False
    )
    group = parser.add_mutually_exclusive_group(required=True) 
    group.add_argument(
      '--output_table',
      dest='output_table',
      help=
      ('Output BigQuery table for results specified as: PROJECT:DATASET.TABLE '
       'or DATASET.TABLE.'))
    group.add_argument(
        '--output_file',
        dest='output_file',
        help='Output file to write results to.')
    known_args, pipeline_args = parser.parse_known_args(argv)

    options = PipelineOptions(pipeline_args)
    options.view_as(SetupOptions).save_main_session = True

    if known_args.input_subscription:
        options.view_as(StandardOptions).streaming=True

    with beam.Pipeline(options=options) as p:

        from apache_beam.io.gcp.internal.clients import bigquery

        table_schema = bigquery.TableSchema()

        attribfield = bigquery.TableFieldSchema()
        attribfield.name = 'attributes'
        attribfield.type = 'record'
        attribfield.mode = 'nullable'

        attribsource = bigquery.TableFieldSchema()
        attribsource.name = 'source'
        attribsource.type = 'string'
        attribsource.mode = 'nullable'

        attribtimestamp = bigquery.TableFieldSchema()
        attribtimestamp.name = 'timestamp'
        attribtimestamp.type = 'string'
        attribtimestamp.mode = 'nullable'

        attribfield.fields.append(attribsource)
        attribfield.fields.append(attribtimestamp)
        table_schema.fields.append(attribfield)

        datafield = bigquery.TableFieldSchema()
        datafield.name = 'data'
        datafield.type = 'record'
        datafield.mode = 'nullable'

        datanumberfield = bigquery.TableFieldSchema()
        datanumberfield.name = 'rownumber'
        datanumberfield.type = 'integer'
        datanumberfield.mode = 'nullable'
        datafield.fields.append(datanumberfield)
        table_schema.fields.append(datafield)

        attribstringfield = bigquery.TableFieldSchema()
        attribstringfield.name = 'attribstring'
        attribstringfield.type = 'string'
        attribstringfield.mode = 'nullable'
        table_schema.fields.append(attribstringfield)

        if known_args.input_subscription:
            messages = (p
            | 'Read From Pub Sub' >> ReadFromPubSub(
                    subscription=known_args.input_subscription,
                    with_attributes=True,
                    # Intended to deduplicate on the service-generated
                    # message ID (supported on the Dataflow runner only).
                    id_label='message_id')
            | 'Format Message' >> beam.Map(format_message_element)
            )

            output = (messages | 'write' >> beam.io.WriteToBigQuery(
                        known_args.output_table,
                        schema=table_schema,
                        create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
                    )

    # The "with" block above runs the pipeline and waits for it to finish on
    # exit, so an explicit p.run()/wait_until_finish() here would submit the
    # job a second time.

if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  run()

And the code to run the Python script:-

python PythonTestMessageId.py --runner DataflowRunner --project [YOURPROJECT] --input_subscription projects/[YOURPROJECT]/subscriptions/test-apache-beam.subscription --output_table [YOURPROJECT]:test.newtest --temp_location gs://[YOURPROJECT]/tmp --job_name test-job

In the code provided, I'm simply converting the dictionary of the Attributes property to a string and inserting it into a BigQuery table. The data returned in the table looks like this:-

[screenshot: the BigQuery table rows contain only the source and timestamp attributes, the rownumber data field, and the attribstring column]

As you can see, the two properties within the attributes field are simply those that I have passed in, and the PubSub message id is not available.

Is there a way this can be returned?

Solution

Looks like this may not be working as intended, and a JIRA issue has been logged: https://issues.apache.org/jira/plugins/servlet/mobile#issue/BEAM-7819

The documentation for the ReadFromPubSub method and the PubsubMessage type suggests that service-generated values such as the one named by id_label should be returned as part of the attributes property; however, they do not appear to be. Note also that the id_label parameter is only supported when using the Dataflow runner.
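
In the meantime, a possible workaround for the deduplication goal (though not for recovering the service-generated ID itself) is to have the publisher attach its own unique identifier as an ordinary attribute and point id_label at that attribute instead, since id_label designates the attribute used as the unique record identifier. A minimal sketch, assuming the google-cloud-pubsub client from the question; the attribute name dedup_id is hypothetical:

import json
import uuid

from google.cloud import pubsub_v1

project_id = "[YOUR PROJECT]"
topic_name = "test-apache-beam"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_name)

for n in range(1, 11):
    jsondata = json.dumps({'rownumber': n})  # must be bytes on Python 3
    # Attach a client-generated unique ID as an ordinary attribute; the
    # Dataflow runner can then deduplicate on it via id_label='dedup_id'.
    publisher.publish(topic_path, data=jsondata,
                      dedup_id=str(uuid.uuid4()),
                      source='python')

On the reading side the only change would be ReadFromPubSub(subscription=..., with_attributes=True, id_label='dedup_id'). Because dedup_id is a real attribute, it also appears in message.attributes and could be written to BigQuery like the other fields.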

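Finally, a forward-looking note that should be verified against the SDK changelog: later releases of the Beam Python SDK (well after 2.13.0) extended apache_beam.io.gcp.pubsub.PubsubMessage with a message_id field, which is the direction the JIRA issue above points in. Assuming such a newer SDK, and assuming the runner populates the field, the formatting function could read the ID directly:

# Sketch for newer Beam Python SDKs only: PubsubMessage carries no
# message_id field in 2.13.0, so this will not run on the setup above.
def format_message_element(message, timestamp=beam.DoFn.TimestampParam):
    return {'data': json.loads(message.data),
            'attributes': dict(message.attributes),
            'message_id': message.message_id}  # server-assigned ID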