How to listen for the right ACK message from Kafka


Problem description

I am doing a POC with Spring Boot & Kafka for a transactional project, and I have the following question:

Scenario: one microservice, MSPUB1, receives requests from customers. Each request publishes a message on the Kafka topic TRANSACTION_TOPIC1, and the microservice can receive multiple requests in parallel. MSPUB1 listens on the topic TRANSACTION_RESULT1 to check that the transaction has finished.

On the other side of the streaming platform, another microservice, MSSUB1, listens on the topic TRANSACTION_TOPIC1, processes every message, and publishes the result on TRANSACTION_RESULT1.

What is the best way for MSPUB1 to know whether a message on topic TRANSACTION_RESULT1 matches its original request? MSPUB1 could attach an ID to every message it publishes on the initial topic TRANSACTION_TOPIC1 and have that ID carried over to TRANSACTION_RESULT1.

Question: when you read from a partition you advance the offset, but in a concurrent environment with multiple requests in flight, how do you check whether the message on topic TRANSACTION_RESULT1 is the one you are waiting for?
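
For illustration, here is a minimal sketch of the correlation-id bookkeeping described above, assuming the id is sent along with the message to TRANSACTION_TOPIC1 and comes back unchanged on TRANSACTION_RESULT1; PendingReplies is a hypothetical helper, not something from the original post.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical helper: MSPUB1 registers each request under a correlation id before publishing,
// and the TRANSACTION_RESULT1 listener completes the matching future, so every reply is matched
// to the request that produced it even when many requests are in flight.
public class PendingReplies {

    private final ConcurrentMap<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Call just before publishing to TRANSACTION_TOPIC1; block on the returned future (with a timeout).
    public CompletableFuture<String> register(String correlationId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(correlationId, future);
        return future;
    }

    // Call from the TRANSACTION_RESULT1 listener; false means the reply was not for one of this
    // instance's requests (or the request already timed out) and can be ignored here.
    public boolean complete(String correlationId, String reply) {
        CompletableFuture<String> future = pending.remove(correlationId);
        if (future == null) {
            return false;
        }
        future.complete(reply);
        return true;
    }
}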

Many thanks

Juan Antonio

Answer

One way to do it is to use a Spring Integration BarrierMessageHandler: the thread that published the request is suspended at the barrier until a reply carrying the same correlation id is passed to trigger(), and the request/reply pair is then released together.

Here is an example app. Hopefully, it's self-explanatory. Kafka 0.11 or higher is needed...

@SpringBootApplication
@RestController
public class So48349993Application {

    private static final Logger logger = LoggerFactory.getLogger(So48349993Application.class);

    private static final String TRANSACTION_TOPIC1 = "TRANSACTION_TOPIC1";

    private static final String TRANSACTION_RESULT1 = "TRANSACTION_RESULT1";

    public static void main(String[] args) {
        SpringApplication.run(So48349993Application.class, args);
    }

    private final Exchanger exchanger;

    private final KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public So48349993Application(Exchanger exchanger,
            KafkaTemplate<String, String> kafkaTemplate) {
        this.exchanger = exchanger;
        this.kafkaTemplate = kafkaTemplate;
        kafkaTemplate.setDefaultTopic(TRANSACTION_RESULT1);
    }

    @RequestMapping(path = "/foo/{id}/{other}", method = RequestMethod.GET)
    @ResponseBody
    public String foo(@PathVariable String id, @PathVariable String other) {
        logger.info("Controller received: " + other);
        String reply = this.exchanger.exchange(id, other);
        // if reply is null, we timed out
        logger.info("Controller replying: " + reply);
        return reply;
    }

    // Client side

    @MessagingGateway(defaultRequestChannel = "outbound", defaultReplyTimeout = "10000")
    public interface Exchanger {

        @Gateway
        String exchange(@Header(IntegrationMessageHeaderAccessor.CORRELATION_ID) String id,
                @Payload String out);

    }

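    // Route each request both to Kafka (TRANSACTION_TOPIC1) and to the barrier channel; the
    // barrier suspends the gateway's calling thread until the matching reply is triggered.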
    @Bean
    public IntegrationFlow router() {
        return IntegrationFlows.from("outbound")
                .routeToRecipients(r -> r
                        .recipient("toKafka")
                        .recipient("barrierChannel"))
                .get();
    }

    @Bean
    public IntegrationFlow outFlow(KafkaTemplate<String, String> kafkaTemplate) {
        return IntegrationFlows.from("toKafka")
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate).topic(TRANSACTION_TOPIC1))
                .get();
    }

    @Bean
    public IntegrationFlow barrierFlow(BarrierMessageHandler barrier) {
        return IntegrationFlows.from("barrierChannel")
                .handle(barrier)
                .transform("payload.get(1)") // payload is list with input/reply
                .get();
    }

    @Bean
    public BarrierMessageHandler barrier() {
        return new BarrierMessageHandler(10_000L);
    }

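    // Reply listener: releases the suspended barrier; BarrierMessageHandler matches the trigger
    // to the waiting request by the correlation id header carried through Kafka.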
    @KafkaListener(id = "clientReply", topics = TRANSACTION_RESULT1)
    public void result(Message<?> reply) {
        logger.info("Received reply: " + reply.getPayload() + " for id "
                + reply.getHeaders().get(IntegrationMessageHeaderAccessor.CORRELATION_ID));
        barrier().trigger(reply);
    }

    // Server side

    @KafkaListener(id = "server", topics = TRANSACTION_TOPIC1)
    public void service(String in,
            @Header(IntegrationMessageHeaderAccessor.CORRELATION_ID) String id) throws InterruptedException {
        logger.info("Service Received " + in);
        Thread.sleep(5_000);
        logger.info("Service Replying to " + in);
        // with spring-kafka 2.0 (and Boot 2), you can return a String and use @SendTo instead of this.
        this.kafkaTemplate.send(new GenericMessage<>("reply for " + in,
                Collections.singletonMap(IntegrationMessageHeaderAccessor.CORRELATION_ID, id)));
    }

    // Provision topics if needed

    // provided by Boot in 2.0
    @Bean
    public KafkaAdmin admin() {
        Map<String, Object> config = new HashMap<>();
        config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        return new KafkaAdmin(config);
    }

    @Bean
    public NewTopic topic1() {
        return new NewTopic(TRANSACTION_TOPIC1, 10, (short) 1);
    }

    @Bean
    public NewTopic result1() {
        return new NewTopic(TRANSACTION_RESULT1, 10, (short) 1);
    }

}

Result
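
The log below was produced by sending two overlapping requests to the REST endpoint (e.g. GET /foo/1/foo followed by GET /foo/2/bar, matching the ids and payloads in the log), so two transactions are in flight at the same time and each controller call receives only its own reply.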

2018-01-20 17:27:54.668  INFO 98522 --- [   server-1-C-1] com.example.So48349993Application        : Service Received foo
2018-01-20 17:27:55.782  INFO 98522 --- [nio-8080-exec-2] com.example.So48349993Application        : Controller received: bar
2018-01-20 17:27:55.788  INFO 98522 --- [   server-0-C-1] com.example.So48349993Application        : Service Received bar
2018-01-20 17:27:59.673  INFO 98522 --- [   server-1-C-1] com.example.So48349993Application        : Service Replying to foo
2018-01-20 17:27:59.702  INFO 98522 --- [ientReply-1-C-1] com.example.So48349993Application        : Received reply: reply for foo for id 1
2018-01-20 17:27:59.705  INFO 98522 --- [nio-8080-exec-1] com.example.So48349993Application        : Controller replying: reply for foo
2018-01-20 17:28:00.792  INFO 98522 --- [   server-0-C-1] com.example.So48349993Application        : Service Replying to bar
2018-01-20 17:28:00.798  INFO 98522 --- [ientReply-0-C-1] com.example.So48349993Application        : Received reply: reply for bar for id 2
2018-01-20 17:28:00.800  INFO 98522 --- [nio-8080-exec-2] com.example.So48349993Application        : Controller replying: reply for bar

Pom

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>so48349993</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>so48349993</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.9.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-integration</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-kafka</artifactId>
            <version>2.3.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.3.2.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>


</project>

application.properties

spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.concurrency=2
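
These properties disable auto-commit, make a new consumer group start from the earliest offset, and give each listener container two consumer threads; the two server threads in the log above (server-0-C-1 and server-1-C-1) are the concurrency=2 setting at work.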

Edit

And here is a version that uses Spring Integration on the server side instead of a @KafkaListener...

@SpringBootApplication
@RestController
public class So483499931Application {

    private static final Logger logger = LoggerFactory.getLogger(So483499931Application.class);

    private static final String TRANSACTION_TOPIC1 = "TRANSACTION_TOPIC3";

    private static final String TRANSACTION_RESULT1 = "TRANSACTION_RESULT3";

    public static void main(String[] args) {
        SpringApplication.run(So483499931Application.class, args);
    }

    private final Exchanger exchanger;

    private final KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public So483499931Application(Exchanger exchanger,
            KafkaTemplate<String, String> kafkaTemplate) {
        this.exchanger = exchanger;
        this.kafkaTemplate = kafkaTemplate;
        kafkaTemplate.setDefaultTopic(TRANSACTION_RESULT1);
    }

    @RequestMapping(path = "/foo/{id}/{other}", method = RequestMethod.GET)
    @ResponseBody
    public String foo(@PathVariable String id, @PathVariable String other) {
        logger.info("Controller received: " + other);
        String reply = this.exchanger.exchange(id, other);
        logger.info("Controller replying: " + reply);
        return reply;
    }

    // Client side

    @MessagingGateway(defaultRequestChannel = "outbound", defaultReplyTimeout = "10000")
    public interface Exchanger {

        @Gateway
        String exchange(@Header(IntegrationMessageHeaderAccessor.CORRELATION_ID) String id,
                @Payload String out);

    }

    @Bean
    public IntegrationFlow router() {
        return IntegrationFlows.from("outbound")
                .routeToRecipients(r -> r
                        .recipient("toKafka")
                        .recipient("barrierChannel"))
                .get();
    }

    @Bean
    public IntegrationFlow outFlow(KafkaTemplate<String, String> kafkaTemplate) {
        return IntegrationFlows.from("toKafka")
                .handle(Kafka.outboundChannelAdapter(kafkaTemplate).topic(TRANSACTION_TOPIC1))
                .get();
    }

    @Bean
    public IntegrationFlow barrierFlow(BarrierMessageHandler barrier) {
        return IntegrationFlows.from("barrierChannel")
                .handle(barrier)
                .transform("payload.get(1)") // payload is list with input/reply
                .get();
    }

    @Bean
    public BarrierMessageHandler barrier() {
        return new BarrierMessageHandler(10_000L);
    }

    @KafkaListener(id = "clientReply", topics = TRANSACTION_RESULT1)
    public void result(Message<?> reply) {
        logger.info("Received reply: " + reply.getPayload() + " for id "
                + reply.getHeaders().get(IntegrationMessageHeaderAccessor.CORRELATION_ID));
        barrier().trigger(reply);
    }

    // Server side

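    // Server-side flow: consume from TRANSACTION_TOPIC1, build the reply payload in service(),
    // and publish it to TRANSACTION_RESULT1; the correlation id header is carried along with
    // the message through the flow.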
    @Bean
    public IntegrationFlow server(ConsumerFactory<String, String> consumerFactory,
            KafkaTemplate<String, String> kafkaTemplate) {
        return IntegrationFlows.from(Kafka.messageDrivenChannelAdapter(consumerFactory, TRANSACTION_TOPIC1))
            .handle("so483499931Application", "service")
            .handle(Kafka.outboundChannelAdapter(kafkaTemplate).topic(TRANSACTION_RESULT1))
            .get();
    }

    public String service(String in) throws InterruptedException {
        logger.info("Service Received " + in);
        Thread.sleep(5_000);
        logger.info("Service Replying to " + in);
        return "reply for " + in;
    }

    // Provision topics if needed

    // provided by Boot in 2.0
    @Bean
    public KafkaAdmin admin() {
        Map<String, Object> config = new HashMap<>();
        config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        return new KafkaAdmin(config);
    }

    @Bean
    public NewTopic topic1() {
        return new NewTopic(TRANSACTION_TOPIC1, 10, (short) 1);
    }

    @Bean
    public NewTopic result1() {
        return new NewTopic(TRANSACTION_RESULT1, 10, (short) 1);
    }

}

spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.concurrency=2
spring.kafka.consumer.group-id=server
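
Note the extra spring.kafka.consumer.group-id=server: in this version the server side consumes through the message-driven channel adapter, which takes its consumer group from the Boot-configured ConsumerFactory rather than from a @KafkaListener id.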
