Sending HTTP response after consuming a Kafka topic

Question

I’m currently writing a web application that has a bunch of microservices. I’m currently exploring how to properly communicate between all these services and I’ve decided to stick with a message bus, or more specifically Apache Kafka.

However, I have a few questions that I’m not sure how to conceptually get around. I’m using an API Gateway-service as the main entry to the application. It acts as the main proxy to forward operations to the applicable microservices. Consider the following scenario:

  1. The user sends a POST request with some information to the API Gateway.
  2. The Gateway produces a new message and publishes it to a Kafka topic.
  3. A subscribed microservice receives the message from the topic and processes the data (sketched below).
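
For concreteness, a minimal sketch of step 3 in Python with the kafka-python client (the topic name, group id and broker address are assumptions, not part of the question):

    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    # A subscribed microservice: read events from the (assumed) "orders" topic
    # and process each one as it arrives.
    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        group_id="order-service",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    for message in consumer:
        order = message.value
        print(f"processing order {order}")  # stand-in for the real business logic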

So, how am I now supposed to respond to the client from the Gateway? What if I need some data from that microservice? It feels like that HTTP request could time out. Should I stick with WebSockets between the client and the API Gateway instead?

And also, if the client sends a GET request to fetch some data, how am I supposed to approach that using Kafka?

Thanks.

Answer

Let's say you're going to create an order. This is how it should work:

Traditionally we used to have an auto-increment field or a sequence in the RDBMS table to create an order id. However, this means the order id is not generated until we save the order in the DB. Now, when writing data to Kafka, we're not immediately writing to the DB, and Kafka cannot generate an order id. Hence you need some scalable id-generation utility, like Twitter Snowflake or something with a similar architecture, so that you can generate an order id even before writing the order to Kafka.
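
A minimal Snowflake-style generator sketch in Python (the bit layout and custom epoch are illustrative assumptions, not Twitter's exact implementation):

    import time
    import threading

    class SnowflakeIdGenerator:
        """64-bit ids: 41 bits of milliseconds since a custom epoch,
        10 bits of machine id, 12 bits of per-millisecond sequence."""
        EPOCH_MS = 1577836800000  # 2020-01-01, an arbitrary custom epoch

        def __init__(self, machine_id: int):
            assert 0 <= machine_id < 1024
            self.machine_id = machine_id
            self.sequence = 0
            self.last_ms = -1
            self._lock = threading.Lock()

        def next_id(self) -> int:
            with self._lock:
                now_ms = int(time.time() * 1000)
                if now_ms == self.last_ms:
                    self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit sequence
                    if self.sequence == 0:  # sequence exhausted: wait for the next millisecond
                        while now_ms <= self.last_ms:
                            now_ms = int(time.time() * 1000)
                else:
                    self.sequence = 0
                self.last_ms = now_ms
                return ((now_ms - self.EPOCH_MS) << 22) | (self.machine_id << 12) | self.sequence

    id_gen = SnowflakeIdGenerator(machine_id=1)
    order_id = id_gen.next_id()  # available before anything touches Kafka or the DB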

Once you have the order id, write a single event message to a Kafka topic atomically (all-or-nothing). Once this is successfully done, you can send back a success response to the client. Do not write to multiple topics at this stage, as you would lose atomicity. You can always have multiple consumer groups that write the event to multiple other topics. One consumer group should write the data into some persistent DB for querying.
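
A rough sketch of that write path, assuming a Flask-style gateway endpoint and the kafka-python producer, with id_gen being the Snowflake-style generator from the sketch above (topic name, broker address and payload shape are all assumptions):

    import json
    from flask import Flask, request, jsonify  # pip install flask
    from kafka import KafkaProducer            # pip install kafka-python

    app = Flask(__name__)

    # acks="all" means the broker acknowledges only once the write is fully
    # replicated, so the single event is either durably in the topic or the send fails.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        acks="all",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    @app.route("/orders", methods=["POST"])
    def create_order():
        order = request.get_json()
        order["order_id"] = id_gen.next_id()  # id_gen: the generator sketched earlier
        producer.send("orders", value=order).get(timeout=10)  # block until the ack arrives
        return jsonify({"order_id": order["order_id"], "status": "ACCEPTED"}), 202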

You now need to address the read-your-own-write problem, i.e. immediately after receiving the success response, the user will want to see the order. But your DB is probably not yet updated with the order data. To achieve this, write the order data to a distributed cache like Redis or Memcached immediately after writing it to Kafka and before returning the success response. When the user reads the order, the cached data is returned.
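
For instance, with redis-py (the key naming and one-hour TTL are assumptions), the cache write sits between the Kafka ack and the HTTP response:

    import json
    import redis  # pip install redis

    cache = redis.Redis(host="localhost", port=6379)

    def publish_order(producer, order: dict) -> None:
        """Durable Kafka write first, then prime the cache so the client can
        read its own write before the DB-writing consumer has caught up."""
        producer.send("orders", value=order).get(timeout=10)
        cache.setex(f"order:{order['order_id']}", 3600, json.dumps(order))  # 1-hour TTL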

Now you need to keep the cache updated with the latest order status. You can always do that with a Kafka consumer that reads the order status from a Kafka topic.
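
A sketch of such a cache-updater, running in its own consumer group (the topic, group and key names are assumptions):

    import json
    import redis
    from kafka import KafkaConsumer

    cache = redis.Redis(host="localhost", port=6379)

    # A separate consumer group, so it receives every status event independently
    # of the group that writes orders to the DB.
    consumer = KafkaConsumer(
        "order-status",
        bootstrap_servers="localhost:9092",
        group_id="cache-updater",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    for msg in consumer:
        event = msg.value
        key = f"order:{event['order_id']}"
        cached = cache.get(key)
        if cached:
            order = json.loads(cached)
            order["status"] = event["status"]
            cache.setex(key, 3600, json.dumps(order))  # refresh value and TTL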

Note that you don't need to keep all orders in the cache. You can evict data based on LRU. If, while reading an order, the data is not in the cache, it is read from the DB and written to the cache for future requests.
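
Redis can handle the eviction itself (for example maxmemory together with maxmemory-policy allkeys-lru), and the read path falls back to the DB on a miss. A sketch, with load_order_from_db as a hypothetical DB lookup:

    import json
    import redis

    cache = redis.Redis(host="localhost", port=6379)
    # Eviction can be left to Redis, e.g. maxmemory 1gb + maxmemory-policy allkeys-lru.

    def get_order(order_id: int):
        """Cache-aside read: try Redis first, fall back to the DB and repopulate."""
        key = f"order:{order_id}"
        cached = cache.get(key)
        if cached:
            return json.loads(cached)
        order = load_order_from_db(order_id)  # hypothetical DB lookup
        if order is not None:
            cache.setex(key, 3600, json.dumps(order))
        return order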

Finally, if you want to ensure that the ordered item is reserved for the order so that no one else can take it, such as when booking a flight seat or the last copy of a book, you need a consensus algorithm. You can use Apache Zookeeper for that and create a distributed lock on the item.
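
With the kazoo client, a distributed lock on an item could be sketched like this (the connection string, lock path and identifier are assumptions):

    from kazoo.client import KazooClient  # pip install kazoo

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    # One lock znode per reservable item; only one holder at a time.
    lock = zk.Lock("/locks/items/seat-12A", identifier="order-service-1")

    with lock:  # blocks until the lock is acquired
        # check availability and reserve the item here, then publish the order event
        pass

    zk.stop()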
