Issues with binder using Spring-cloud-stream-kafka-stream


Problem description

I'm trying to read from Kafka using Spring Cloud Stream Kafka Streams. I then aggregate the events in a one-minute time window and write them to a different topic. Next I need to read the aggregated events from that topic and write them to a topic in another Kafka cluster, bridging the two clusters. But I'm getting the binder exception below.

org.springframework.context.ApplicationContextException: Failed to start bean 'outputBindingLifecycle'; nested exception is java.lang.IllegalStateException: The binder 'kafkaha' cannot bind a com.sun.proxy.$Proxy155
    at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:185)
    at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
    at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
    at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
    at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
    at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:893)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:163)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142)
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:775)
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:316)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
    at com.expediagroup.platform.StreamingApplication.main(StreamingApplication.java:11)
Caused by: java.lang.IllegalStateException: The binder 'kafkaha' cannot bind a com.sun.proxy.$Proxy155
    at org.springframework.util.Assert.state(Assert.java:73)
    at org.springframework.cloud.stream.binder.DefaultBinderFactory.doGetBinder(DefaultBinderFactory.java:194)
    at org.springframework.cloud.stream.binder.DefaultBinderFactory.getBinder(DefaultBinderFactory.java:130)
    at org.springframework.cloud.stream.binding.BindingService.getBinder(BindingService.java:337)
    at org.springframework.cloud.stream.binding.BindingService.bindProducer(BindingService.java:229)
    at org.springframework.cloud.stream.binding.BindableProxyFactory.createAndBindOutputs(BindableProxyFactory.java:287)
    at org.springframework.cloud.stream.binding.OutputBindingLifecycle.doStartWithBindable(OutputBindingLifecycle.java:58)
    at java.util.LinkedHashMap$LinkedValues.forEach(LinkedHashMap.java:608)
    at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:48)
    at org.springframework.cloud.stream.binding.OutputBindingLifecycle.start(OutputBindingLifecycle.java:34)
    at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
    ... 14 common frames omitted

I followed the example in the link and tried the code below.

application.yml

spring:
  application.name: eg-destination-attribute-store-ha-search-stream
  cloud:
    consul:
      host: localhost
      port: 8500
      discovery:
        instanceId: eg-destination-attribute-store-ha-search-stream
    stream:
      kafka:
        streams:
          timeWindow:
            length: 60000
            advanceBy: 60000
          bindings:
            inputKstream:
              consumer:
                autoCommitOffset: true
                startOffset: earliest
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
            bridge:
              producer:
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
          binder:
            brokers: kafka.us-east-1.stage.kafka.away.black:9092
            configuration:
              schema.registry.url: http://kafka-schema-registry.us-east-1.stage.kafka.away.black:8081
              commit.interval.ms: 1000
              application.id: eg-test-dev # a random id to identify the application uniquely
            autoAddPartitions: false
            minPartitionCount: 1
            num:
              stream:
                threads: 1
      bindings:
        inputKstream:
          destination: business-events-search-event
          binder: kafkaha
          group: grp-eg-destination-attribute-store-ha-search-stream-ha
          consumer:
            useNativeDecoding: true
        bridge:
          destination: business-events-search-event-agg
          binder: kafkaha
          #group: grp-eg-destination-attribute-store-ha-search-stream
          consumer:
            useNativeDecoding: true
        output:
          destination: business-events-search-event-agg
          binder: kafkaha
          group: grp-eg-destination-attribute-store-ha-search-stream-eg-in
          consumer:
            useNativeDecoding: true
        input:
          destination: business-events-search-event-eg
          binder: kafkaeg
          group: grp-eg-destination-attribute-store-ha-search-stream-eg
          consumer:
            useNativeDecoding: true

      binders:
        kafkaha:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: kafka.us-east-1.stage.kafka.away.black:9092
        kafkaeg:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: localhost:9092

ExecutorHaAgg.java


@Slf4j
@EnableBinding(EgSrcSinkProcessor.class)
public class ExecutorHaAgg {

    @Value("${spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url}")
    private String schemaRegistryUrl;

    @Autowired
    private LookNPersistService service;

    @Autowired
    private TimeWindows timeWindows;

    @Timed(value = "kstream.BusinessModelMaskActiveLogV2.process.time", percentiles = {0.5, 0.9, 0.99}, histogram = true)
    @StreamListener
    @SendTo("bridge")
    public KStream<Windowed<String>, ResultValue> process(@Input("inputKstream") KStream<String, SearchBusinessEvent> inputKstream) {

        final Map<String, String> schemaMap = Collections.singletonMap(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
        final SpecificAvroSerde<SearchBusinessEvent> searchBusinessEventSerde = new SpecificAvroSerde<>();
        searchBusinessEventSerde.configure(schemaMap, false);
        TransformedValueSerde transformedValueSerde = new TransformedValueSerde();
        ResultValueSerde resultValueSerde = new ResultValueSerde();
        return inputKstream
                .filter((k, v) -> (v.getVisitorUuid() != null && v.getSearchTermUUIDs() != null && v.getSearchTermUUIDs().size() > 0))
                .map((k, v) -> KeyValue.pair(StringUtil.getSearchTermFromUri(v.getSearchTermUUIDs().get(0)), new TransformedValue(v.getAvailabilityStart(), v.getAvailabilityEnd(), v.getHeader().getTime())))
                .groupBy((k, v) -> k, Serialized.with(Serdes.String(), transformedValueSerde))
                .windowedBy(timeWindows)
                .aggregate(ResultValue::new, ((key, value, aggregate) -> {
                    aggregate.setSearchTerm(key);
                    aggregate.setTime((aggregate.getTime() < value.getTime()) ? value.getTime() : aggregate.getTime());
                    aggregate.setDatedCount(StringUtil.isDatedStrNullAndEmpty(value.getStartDate(), value.getEndDate()) ? aggregate.getDatedCount() : 1 + aggregate.getDatedCount());
                    aggregate.setCount(1 + aggregate.getCount());
                    return aggregate;
                }), Materialized.with(Serdes.String(), resultValueSerde)).toStream();
    }
}
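
For reference, the @Autowired TimeWindows bean is not defined in the classes shown; the Kafka Streams binder can expose one when spring.cloud.stream.kafka.streams.timeWindow.length is set, as in the YAML above. A minimal sketch of an equivalent hand-written bean, assuming the 60000 ms length and advance from that config (an illustration, not the binder's exact code):

import org.apache.kafka.streams.kstream.TimeWindows;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class WindowConfig {

    // A 60 s tumbling window: advanceBy equal to the window length
    // means consecutive windows do not overlap.
    @Bean
    public TimeWindows timeWindows() {
        return TimeWindows.of(60000L).advanceBy(60000L);
    }
}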

Transporter.java


@Slf4j
@EnableBinding(Processor.class)
public class Transporter {

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public Object transfer(Object object) {
        return object;
    }
}

EgSrcSinkProcessor.java



public interface EgSrcSinkProcessor {

    @Input("inputKstream")
    KStream<?, ?> inputKstream();

    @Output("bridge")
    KStream<?, ?> bridgeKstream();
}

Answer

I had the same issue when trying to mix MessageChannel and KStream in the same set of bindings. Your inputKstream should bind to a kstream-type binder. Mine is as follows:

management.endpoints.web.exposure.include=*
spring.profiles=kafka
spring.cloud.stream.bindings.output.binder=kafka1
spring.cloud.stream.bindings.output.destination=board-events
spring.cloud.stream.bindings.output.contentType=application/json
spring.cloud.stream.bindings.output.producer.header-mode=none
spring.cloud.stream.bindings.input.binder=kstream1
spring.cloud.stream.bindings.input.destination=board-events
spring.cloud.stream.bindings.input.contentType=application/json
spring.cloud.stream.bindings.input.group=command-board-events-group
spring.cloud.stream.bindings.input.consumer.useNativeDecoding=true
spring.cloud.stream.bindings.input.consumer.header-mode=none
spring.cloud.stream.kafka.streams.binder.brokers=localhost
spring.cloud.stream.default-binder=kafka1
spring.cloud.stream.binders.kafka1.type=kafka
spring.cloud.stream.binders.kafka1.environment.spring.cloud.stream.kafka.streams.binder.brokers=localhost
spring.cloud.stream.binders.kstream1.type=kstream
spring.cloud.stream.binders.kstream1.environment.spring.cloud.stream.kafka.streams.binder.brokers=localhost
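
Applied to the YAML in the question, the same fix would mean declaring a kstream-type binder and pointing the KStream bindings (inputKstream and bridge) at it, instead of the kafka-type kafkaha binder that cannot bind the KStream proxy. A hedged sketch; the binder name kstreamha is made up for illustration:

spring:
  cloud:
    stream:
      bindings:
        inputKstream:
          binder: kstreamha   # was kafkaha (type kafka), which cannot bind a KStream
        bridge:
          binder: kstreamha
      binders:
        kstreamha:
          type: kstream
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    streams:
                      binder:
                        brokers: kafka.us-east-1.stage.kafka.away.black:9092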

