Send messages from Aurora MySQL to SQS


Question

I have two Lambdas with an SQS queue in between. The first Lambda's purpose is to pick product IDs from Aurora MySQL and send them to SQS; there are over 7 million product IDs. On the queue I have enabled a trigger that invokes my second Lambda.

The issue I am facing is that my first Lambda cannot send all the product IDs to the queue in one invocation because of Lambda's time limit. I tested it, and in one invocation it was able to send only 100k records to SQS. If I run it again, it will obviously pick the same product IDs again. Even if I put a limit and offset in my Lambda, after the first invocation I'd have to change the offset by hand to pick the next 100k records, which is tedious. How can I automate this process?
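For concreteness, the manual paging described above looks something like this (a hypothetical sketch; the `products` table and `product_id` column names are assumptions, not from the question):

```python
BATCH_SIZE = 100_000
OFFSET = 0  # must be edited by hand to 100_000, 200_000, ... before each re-run

# Offset paging: every invocation scans past all previously sent rows,
# and the offset itself has to be maintained manually between runs.
query = (f"SELECT product_id FROM products ORDER BY product_id "
         f"LIMIT {BATCH_SIZE} OFFSET {OFFSET}")
```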

Answer

Have you tried writing to S3 a CSV file that stores the latest index/product ID you have sent to SQS, which you then read back at the start of the next iteration of your Lambda?

Here's a rough implementation of the steps (a sketch follows the list):

  1. Load the latest index/product ID from S3
  2. [whatever other process you execute]
  3. Rewrite the CSV file on S3 with the latest index/product ID
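A minimal sketch of that loop, assuming a `products` table with an integer `product_id` primary key, `pymysql` as the Aurora MySQL driver, and hypothetical bucket, key, and queue names (none of these appear in the original answer):

```python
import csv
import io

import boto3
import pymysql  # assumed driver; any MySQL client works the same way

# Hypothetical resource names -- replace with your own.
BUCKET = "my-checkpoint-bucket"
KEY = "sqs-export/last_product_id.csv"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/product-ids"
BATCH_SIZE = 100_000  # roughly what one invocation managed, per the question

s3 = boto3.client("s3")
sqs = boto3.client("sqs")


def load_checkpoint():
    """Step 1: read the last product ID sent to SQS from the CSV on S3."""
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY)
        row = next(csv.reader(io.StringIO(obj["Body"].read().decode("utf-8"))))
        return int(row[0])
    except s3.exceptions.NoSuchKey:
        return 0  # first run: start from the beginning


def save_checkpoint(last_id):
    """Step 3: rewrite the CSV with the newest product ID."""
    buf = io.StringIO()
    csv.writer(buf).writerow([last_id])
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=buf.getvalue())


def handler(event, context):
    last_id = load_checkpoint()

    # Step 2: resume *after* the checkpoint instead of using OFFSET, so each
    # invocation picks up exactly where the previous one stopped.
    conn = pymysql.connect(host="aurora-endpoint", user="user",
                           password="password", database="mydb")
    with conn.cursor() as cur:
        cur.execute(
            "SELECT product_id FROM products WHERE product_id > %s "
            "ORDER BY product_id LIMIT %s",
            (last_id, BATCH_SIZE),
        )
        ids = [row[0] for row in cur.fetchall()]
    conn.close()

    # SQS accepts at most 10 entries per send_message_batch call.
    for i in range(0, len(ids), 10):
        chunk = ids[i:i + 10]
        sqs.send_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[{"Id": str(n), "MessageBody": str(pid)}
                     for n, pid in enumerate(chunk)],
        )

    if ids:
        save_checkpoint(ids[-1])
```

Checkpointing the last ID rather than an offset means each invocation resumes with a cheap indexed `WHERE product_id > ?` seek instead of scanning past all previously sent rows; re-running the Lambda on a schedule (or letting it re-invoke itself while rows remain) then walks through all 7 million IDs with no manual changes.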

