Kafka Consumer does not receive messages
Question
I am a newbie with Kafka. I have read many tutorials on the Internet about writing a Kafka producer and a Kafka consumer. I got the former working: it can send messages to the Kafka cluster. However, I have not managed to get the latter working. Please kindly help me solve this problem. My problem looks like some posts on StackOverflow, but I want to describe it more clearly. I run Kafka and Zookeeper on an Ubuntu server in VirtualBox, using the simplest configuration (almost all defaults) with 1 Kafka broker and 1 Zookeeper node.
1. When I use Kafka's command line for both producer and consumer:
* Case 1: It works. I can see the words Hello, World on the screen.
$ ~/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic helloKafka --partitions 1 --replication-factor 1
$ echo "Hello, World" | ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TutorialTopic > /dev/null
$ ~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
2. When I use my Producer program and Kafka's command line for the consumer:
* Case 2: It works. I can see the messages sent from the Producer on the screen.
$ ~/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic helloKafka --partitions 1 --replication-factor 1
$ ~/kafka/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TutorialTopic --from-beginning
$ java -cp target/Kafka_Producer_Program-0.0.1-SNAPSHOT.jar AddLab_Producer
3. When I use my Producer and my Consumer programs:
* Case 3: Only the Producer works. The Consumer runs but does not show any messages.
$ ~/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic helloKafka --partitions 1 --replication-factor 1
$ java -cp target/Kafka_Producer_Program-0.0.1-SNAPSHOT.jar AddLab_Producer
$ java -cp target/Kafka_Consumer_Program-0.0.1-SNAPSHOT.jar AddLab_Consumer
This is my Producer and Consumer code. Actually, I copied it from some Kafka tutorial websites.
* Producer program
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AddLab_Producer {
    public static void main(String args[]) throws InterruptedException, ExecutionException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        boolean sync = false;
        String topic = args[0]; // topic name from the command line
        String key = "mykey";   // note: never used; records are sent without a key
        for (int i = 1; i <= 3; i++) {
            String value = args[1] + " " + i;
            ProducerRecord<String, String> producerRecord = new ProducerRecord<String, String>(topic, value);
            if (sync) {
                producer.send(producerRecord).get(); // block until the send is acknowledged
            } else {
                producer.send(producerRecord);       // fire-and-forget
            }
        }
        producer.close(); // flushes any buffered records
    }
}
* Consumer program
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class AddLab_Consumer {

    public static class KafkaPartitionConsumer implements Runnable {
        private int tnum;
        private KafkaStream kfs;

        public KafkaPartitionConsumer(int id, KafkaStream ks) {
            tnum = id;
            kfs = ks;
        }

        public void run() {
            System.out.println("This is thread " + tnum);
            ConsumerIterator<byte[], byte[]> it = kfs.iterator();
            int i = 1;
            while (it.hasNext()) { // blocks until a message arrives
                System.out.println(tnum + " " + i + ": " + new String(it.next().message()));
                ++i;
            }
        }
    }

    public static class MultiKafka {
        public void run() {
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "mygroupid2");
        props.put("zookeeper.session.timeout.ms", "413");
        props.put("zookeeper.sync.time.ms", "203");
        props.put("auto.commit.interval.ms", "1000");
        ConsumerConfig cf = new ConsumerConfig(props);
        ConsumerConnector consumer = Consumer.createJavaConsumerConnector(cf);

        String topic = "mytopic";
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1)); // one stream for this topic
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        ExecutorService executor = Executors.newFixedThreadPool(1);
        int threadnum = 0;
        for (KafkaStream<byte[], byte[]> stream : streams) {
            executor.execute(new KafkaPartitionConsumer(threadnum, stream));
            ++threadnum;
        }
    }
}
* My POM.xml file
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>HelloJava</groupId>
  <artifactId>HelloJava</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.10</artifactId>
      <version>0.9.0.0</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <sourceDirectory>src</sourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>1.4</version>
        <configuration>
          <createDependencyReducedPom>true</createDependencyReducedPom>
          <filters>
            <filter>
              <artifact>*:*</artifact>
              <excludes>
                <exclude>META-INF/*.SF</exclude>
                <exclude>META-INF/*.DSA</exclude>
                <exclude>META-INF/*.RSA</exclude>
              </excludes>
            </filter>
          </filters>
        </configuration>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
I really appreciate your help. Thank you very much.
[Screenshot: the Consumer screen. It appears to be running but does not receive any messages from the Producer.]
Answer
I've encountered the same problem as you. After trying for a long time, here is the answer.
With the new Kafka consumer API there are two approaches; you can choose either one:
consumer.assign(...)
consumer.subscribe(...)
And use it like this:
// Set these properties; otherwise you must start the consumer before the producer
props.put("enable.auto.commit", "false");
props.put("auto.offset.reset", "earliest");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
boolean assign = false;
if (assign) {
    // Manually take partition 0 of the topic and rewind to the beginning
    TopicPartition tp = new TopicPartition(topic, 0);
    consumer.assign(Arrays.asList(tp));
    consumer.seekToBeginning(tp); // varargs signature in the 0.9 client
} else {
    // Join the consumer group and let Kafka assign partitions
    consumer.subscribe(Arrays.asList(topic));
}
// Poll in a loop; without this the consumer never fetches anything
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        System.out.println(record.offset() + ": " + record.value());
}
http://kafka.apache.org/documentation.html#newconsumerconfigs
If you use the old consumer API, the properties configuration is almost the same. Remember to add the following two lines if you want to see messages that were produced before the consumer started:
props.put("enable.auto.commit", "false");
props.put("auto.offset.reset", "earliest");
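Putting the answer's settings together, a minimal configuration for the new consumer API might look like the sketch below. This is only a sketch under assumptions: the broker address, group id, and String deserializers are placeholders matching the question's setup, not required values.

```java
import java.util.Properties;

public class ConsumerProps {
    // Collect the consumer settings from this answer in one place.
    // localhost:9092 and mygroupid2 are placeholders for your own setup.
    public static Properties newConsumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // new API talks to the broker, not Zookeeper
        props.put("group.id", "mygroupid2");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // The two settings highlighted above: without them, a consumer group
        // with no committed offset starts at the latest offset and never sees
        // messages produced before it started.
        props.put("enable.auto.commit", "false");
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(newConsumerProps().getProperty("auto.offset.reset"));
    }
}
```

Note that auto.offset.reset only applies when the group has no committed offset for a partition; once offsets are committed, the consumer resumes from them regardless of this setting.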
Hope this helps other people.