org.apache.kafka.common.KafkaException: Failed to construct kafka consumer


Problem Description

I am manually starting Zookeeper, then the Kafka server, and finally the Kafka-Rest server with their respective properties files. Next, I am deploying my Spring Boot application on Tomcat.

In the Tomcat log trace, I am getting the error org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer, and my application fails to start up.

Error Log

25-Dec-2017 15:00:32.508 SEVERE [localhost-startStop-1] org.apache.catalina.core.ContainerBase.addChildInternal ContainerBase.addChild: start:
 org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/spring-kafka-webhook-service-0.0.1-SNAPSHOT]]
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167)
        at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:752)
        at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:728)
        at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
        at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:986)
        at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1857)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
        at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178)
        at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:50)
        at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:348)
        at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:151)
        at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:114)
        at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:880)
        at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.finishRefresh(EmbeddedWebApplicationContext.java:144)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:546)
        at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.refresh(EmbeddedWebApplicationContext.java:122)
        at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:693)
        at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:360)
        at org.springframework.boot.SpringApplication.run(SpringApplication.java:303)
        at org.springframework.boot.web.support.SpringBootServletInitializer.run(SpringBootServletInitializer.java:154)
        at org.springframework.boot.web.support.SpringBootServletInitializer.createRootApplicationContext(SpringBootServletInitializer.java:134)
        at org.springframework.boot.web.support.SpringBootServletInitializer.onStartup(SpringBootServletInitializer.java:87)
        at org.springframework.web.SpringServletContainerInitializer.onStartup(SpringServletContainerInitializer.java:169)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5196)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
        ... 10 more
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:702)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:557)
        at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:73)
        at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:69)
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:305)
        at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:230)
        at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:180)
        at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:202)
        at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:126)
        at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:202)
        at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:287)
        at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:236)
        at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:175)
        ... 27 more
Caused by: java.lang.NoClassDefFoundError: org/apache/kafka/common/ClusterResourceListener
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:2283)
        at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:811)
        at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1260)
        at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1119)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:332)
        at org.apache.kafka.common.config.AbstractConfig.getConfiguredInstances(AbstractConfig.java:225)
        at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:643)
        ... 39 more
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.ClusterResourceListener
        at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1291)
        at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1119)
        ... 51 more

Receiver class

public class InventoryEventReceiver {

    private static final Logger log = LoggerFactory.getLogger(InventoryEventReceiver.class);

    private CountDownLatch latch = new CountDownLatch(1);

    public CountDownLatch getLatch() {
        return latch;
    }

    @KafkaListener(topics="inventory", containerFactory="kafkaListenerContainerFactory")
    public void listenWithHeaders(
            @Payload InventoryEvent event,
            @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
            @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) Integer key,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
            @Header(KafkaHeaders.OFFSET) String offset
            ) {

        System.out.println("EVENT HAS BEEN RECEIVED by listenWithHeaders(InventoryEvent)");
        System.out.println(event.toString());

        log.info(System.currentTimeMillis() + "-- Received Event :\"" + event + "\" from partition:offset -- " + partition + ":" + offset +
                " for topic : " + topic);

        String urlForInventoryListeners = "http://localhost:8080/" + topic + "/listeners";
        OutputStream os = null;
        try {
            URL objectUrl = new URL(urlForInventoryListeners);
            HttpURLConnection con = (HttpURLConnection) objectUrl.openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "application/json; charset=UTF-8");
            con.setRequestProperty("topic", topic);
            Gson gson = new Gson();
            String eventJson = gson.toJson(event);
            con.setDoOutput(true);
            os = con.getOutputStream();
            os.write(eventJson.getBytes("UTF-8"));
            System.out.println("Event sent to " + objectUrl);

        } catch (Exception e) {
            e.printStackTrace();
            System.out.println(e.getMessage());
        } finally {
            try {
                if (os != null) { // guard: os stays null if the connection failed before the stream opened
                    os.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
                System.out.println(e.getMessage());
            }
        }

        latch.countDown();
    }

}

Receiver config class

@Configuration
@EnableKafka
public class InventoryReceiverConfig {

    @Autowired
    private KafkaConfig kafkaConfig;

    @Bean
    public static ConsumerFactory<String, InventoryEvent> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new JsonDeserializer<>(InventoryEvent.class));
    }

    @Bean
    public static ConcurrentKafkaListenerContainerFactory<String, InventoryEvent> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, InventoryEvent> containerFactory = new ConcurrentKafkaListenerContainerFactory<>();
        containerFactory.setConsumerFactory(consumerFactory());
        containerFactory.setConcurrency(3);
        containerFactory.getContainerProperties().setPollTimeout(3000);
        return containerFactory;
    }

    @Bean
    public static Map<String, Object> consumerConfigs() {
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "inventory_consumers");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor");
        return consumerProps;
    }

    @Bean
    public InventoryEventReceiver receiver() {
        return new InventoryEventReceiver();
    }

}

And my cluster properties files for server.properties, consumer.properties and kafka-rest.properties are as follows:

server.properties

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://localhost:9092

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

##################### Confluent Proactive Support ######################
# If set to true, and confluent-support-metrics package is installed
# then the feature to collect and report support metrics
# ("Metrics") is enabled. If set to false, the feature is disabled.
#
confluent.support.metrics.enable=true

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

# The customer ID under which support metrics will be collected and
# reported.
#
# When the customer ID is set to "anonymous" (the default), then only a
# reduced set of metrics is being collected and reported.
#
# Confluent customers
# -------------------
# If you are a Confluent customer, then you should replace the default
# value with your actual Confluent customer ID. Doing so will ensure
# that additional support metrics will be collected and reported.
#
confluent.support.customer.id=anonymous

consumer.properties

# Zookeeper connection string
# comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002"
zookeeper.connect=127.0.0.1:2181

# timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

#consumer group id
group.id=test-consumer-group,inventory_consumers

#consumer timeout
#consumer.timeout.ms=5000

kafka-rest.properties

id=kafka-rest-test-server
schema.registry.url=http://localhost:8081
zookeeper.connect=localhost:2181
#
# Configure interceptor classes for sending consumer and producer metrics to Confluent Control Center
# Make sure monitoring-interceptors-<version>.jar is on the Java class path
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.psl.kafka.spring</groupId>
    <artifactId>spring-kafka-webhook-service</artifactId>
    <packaging>war</packaging>
    <name>spring-kafka-webhook-service</name>
    <description>Spring Kafka Webhook Service</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.9.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <repositories>
        <repository>
            <id>confluent</id>
            <url>http://packages.confluent.io/maven/</url>
        </repository>
    </repositories>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <org.springframework-version>5.0.0.RELEASE</org.springframework-version>
        <org.springframework.security-version>4.0.1.RELEASE</org.springframework.security-version>
        <org.aspectj-version>1.8.11</org.aspectj-version>
        <org.slf4j-version>1.7.12</org.slf4j-version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <version>1.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.9.2</version>
            <exclusions>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>jackson-core</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>jackson-annotations</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.9.2</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
            <version>2.9.2</version>
        </dependency>
        <dependency>
            <groupId>org.json</groupId>
            <artifactId>json</artifactId>
            <version>20160810</version>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-kafka</artifactId>
            <version>2.1.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.0.1</version>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>monitoring-interceptors</artifactId>
            <version>3.1.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
    <version>0.0.1-SNAPSHOT</version>
</project>

My Receiver and Sender classes are not using any annotations like @Component or @Service. Does that make any difference?

@Configuration
public class InventorySenderConfig

@Configuration
@EnableKafka
public class InventoryReceiverConfig

@Component
public class KafkaConfig

@Configuration
public class ProducingChannelConfig

@Configuration
public class ConsumingChannelConfig

@RestController
public class KafkaWebhookController

@Service("webhookService")
public class KafkaServiceImpl

@EnableIntegration
@SpringBootApplication
@ComponentScan("com.psl.kafka")
public class SpringKafkaWebhookServiceApplication extends SpringBootServletInitializer

These are my class annotations. Do they look okay, or do I need to change something?

New build error after updating the kafka version to 0.10.1.1

2017-12-26 13:11:44.490  INFO 13444 --- [main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.1.1
2017-12-26 13:11:44.490  INFO 13444 --- [main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : f10ef2720b03b247
2017-12-26 13:12:44.499 ERROR 13444 --- [main] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='inventory-events' and payload='Hello Spring Integration Kafka 0!' to topic inventory:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
2017-12-26 13:12:44.501  WARN 13444 --- [main] o.a.k.c.p.i.ProducerInterceptors : Error executing interceptor onAcknowledgement callback
java.lang.IllegalStateException: clusterResource is not defined
        at io.confluent.shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:174) ~[monitoring-interceptors-3.1.1.jar:na]
        at io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor.onAcknowledgement(MonitoringProducerInterceptor.java:59) ~[monitoring-interceptors-3.1.1.jar:na]
        at org.apache.kafka.clients.producer.internals.ProducerInterceptors.onSendError(ProducerInterceptors.java:116) ~[kafka-clients-0.10.1.1.jar:na]
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:489) [kafka-clients-0.10.1.1.jar:na]
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436) [kafka-clients-0.10.1.1.jar:na]

Do I need to define the interceptor classes that are added as configuration in the ProducingChannelConfig, ConsumingChannelConfig and InventoryReceiverConfig classes?
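One way to take the interceptors out of the picture while debugging (if Control Center monitoring is not required) is simply to omit the interceptor.classes entry from the consumer properties, so the monitoring-interceptors jar is never loaded. A minimal sketch, using plain string config keys instead of the ConsumerConfig constants so it compiles without Kafka on the classpath; the helper name is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerPropsSketch {

    // Same keys as the question's consumerConfigs(), but without the
    // Confluent MonitoringConsumerInterceptor entry ("interceptor.classes").
    static Map<String, Object> consumerConfigsWithoutInterceptor() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "inventory_consumers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.springframework.kafka.support.serializer.JsonDeserializer");
        // No "interceptor.classes" entry: the consumer is constructed without
        // io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor.
        return props;
    }

    public static void main(String[] args) {
        Map<String, Object> props = consumerConfigsWithoutInterceptor();
        System.out.println(props.containsKey("interceptor.classes")); // prints "false"
    }
}
```

With that entry gone, the Failed to construct kafka consumer path through AbstractConfig.getConfiguredInstances (visible in the stack trace above) is never taken for the interceptor class.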

Solution

Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.ClusterResourceListener

The kafka-clients jar is missing from your classpath. What are you using for dependency management? Maven and Gradle should put this jar on the classpath automatically.
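For reference, a sketch of what the dependency could look like. The coordinates are taken from the question's own pom.xml; the 0.10.1.x version is an assumption based on the fact that org.apache.kafka.common.ClusterResourceListener only exists from kafka-clients 0.10.1.0 onwards, while the question's pom declares 0.10.0.1:

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <!-- ClusterResourceListener requires >= 0.10.1.0; 0.10.0.1 (as in the question's pom) lacks it -->
    <version>0.10.1.1</version>
</dependency>
```

You can verify which kafka-clients version actually ends up in the deployed WAR by running mvn dependency:tree -Dincludes=org.apache.kafka and by inspecting WEB-INF/lib of the built war file.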

# Make sure that monitoring-interceptors-<version>.jar is on the Java class path
consumer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor
producer.interceptor.classes=io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.psl.kafka.spring</groupId>
    <artifactId>spring-kafka-webhook-service</artifactId>
    <packaging>war</packaging>

    <name>spring-kafka-webhook-service</name>
    <description>Spring Kafka Webhook Service</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.9.RELEASE</version>
        <relativePath /> <!-- lookup parent from repository -->
    </parent>

    <repositories>
        <repository>
            <id>confluent</id>
            <url>http://packages.confluent.io/maven/</url>
        </repository>
    </repositories>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <org.springframework-version>5.0.0.RELEASE</org.springframework-version>
        <org.springframework.security-version>4.0.1.RELEASE</org.springframework.security-version>
        <org.aspectj-version>1.8.11</org.aspectj-version>
        <org.slf4j-version>1.7.12</org.slf4j-version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <version>1.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
            <version>2.9.2</version>
            <exclusions>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>jackson-core</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.fasterxml.jackson.core</groupId>
                    <artifactId>jackson-annotations</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
            <version>2.9.2</version>
        </dependency>
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
            <version>2.9.2</version>
        </dependency>
        <dependency>
            <groupId>org.json</groupId>
            <artifactId>json</artifactId>
            <version>20160810</version>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.integration</groupId>
            <artifactId>spring-integration-kafka</artifactId>
            <version>2.1.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.0.1</version>
        </dependency>
        <dependency>
            <groupId>io.confluent</groupId>
            <artifactId>monitoring-interceptors</artifactId>
            <version>3.1.1</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>


    <version>0.0.1-SNAPSHOT</version>
</project>

My Receiver and Sender classes are not annotated with @Component or @Service. Does that make any difference?

@Configuration 
public class InventorySenderConfig

@Configuration 
@EnableKafka 
public class InventoryReceiverConfig

@Component 
public class KafkaConfig

@Configuration 
public class ProducingChannelConfig

@Configuration 
public class ConsumingChannelConfig

@RestController 
public class KafkaWebhookController

@Service("webhookService") 
public class KafkaServiceImpl

@EnableIntegration 
@SpringBootApplication 
@ComponentScan("com.psl.kafka") 
public class SpringKafkaWebhookServiceApplication extends SpringBootServletInitializer

These are my class annotations. Do they look OK, or do I need to change something?

New build error after updating Kafka to 0.10.1.1

2017-12-26 13:11:44.490  INFO 13444 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 0.10.1.1
2017-12-26 13:11:44.490  INFO 13444 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : f10ef2720b03b247
2017-12-26 13:12:44.499 ERROR 13444 --- [           main] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='inventory-events' and payload='Hello Spring Integration Kafka 0!' to topic inventory:

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

2017-12-26 13:12:44.501  WARN 13444 --- [           main] o.a.k.c.p.i.ProducerInterceptors         : Error executing interceptor onAcknowledgement callback

java.lang.IllegalStateException: clusterResource is not defined
    at io.confluent.shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:174) ~[monitoring-interceptors-3.1.1.jar:na]
    at io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor.onAcknowledgement(MonitoringProducerInterceptor.java:59) ~[monitoring-interceptors-3.1.1.jar:na]
    at org.apache.kafka.clients.producer.internals.ProducerInterceptors.onSendError(ProducerInterceptors.java:116) ~[kafka-clients-0.10.1.1.jar:na]
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:489) [kafka-clients-0.10.1.1.jar:na]
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436) [kafka-clients-0.10.1.1.jar:na]

Do I need to define the interceptor classes that I have configured in the ProducingChannelConfig, ConsumingChannelConfig, and InventoryReceiverConfig classes?
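For reference, `interceptor.classes` is an ordinary consumer/producer property: the client instantiates the named classes itself, so you do not define them as Spring beans — you only need the jar on the classpath. A minimal sketch of how such a property map might be assembled (the bootstrap server, group id, and class name below are illustrative, taken from the configs in this question):

```java
import java.util.HashMap;
import java.util.Map;

public class InterceptorConfigSketch {

    // Builds a hypothetical consumer property map; the string key
    // "interceptor.classes" corresponds to ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG.
    static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "inventory_consumers");
        // The Kafka client loads this class by name at construction time,
        // which is why a missing/mismatched jar fails with
        // "Failed to construct kafka consumer".
        props.put("interceptor.classes",
                "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().get("interceptor.classes"));
    }
}
```

If the interceptor class (or a class it depends on, such as `ClusterResourceListener`) is not resolvable, constructing the `KafkaConsumer` fails immediately, regardless of the Spring configuration classes.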

Solution

Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.ClusterResourceListener

You are missing the kafka-clients jar from your class path. What are you using for dependency management? Maven and Gradle should put this jar on the class path for you automatically.
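Concretely, `ClusterResourceListener` was only added in kafka-clients 0.10.1.0, while the pom above pins 0.10.0.1 — too old for monitoring-interceptors 3.1.1. A sketch of the dependency change (version 0.10.1.1 matching the broker version the asker upgraded to):

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.1.1</version>
</dependency>
```

Alternatively, removing the explicit `kafka-clients` entry lets spring-kafka pull in the client version it was built against, avoiding this kind of mismatch.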
