How to continuously feed sniffed packets to kafka?


Question

Currently I am sniffing packets from my local wlan interface like :

sudo tshark > sampleData.pcap

However, I need to feed this data to kafka.

Currently, I have a kafka producer script producer.sh:

../bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 'spark-kafka'

and feed data to kafka like this:

producer.sh < sampleData.pcap

where in sampleData.pcap I have pre-captured IP packet information.

However, I want to automate the process so that it would look something like this:

sudo tshark > http://localhost:9091
producer.sh < http://localhost:9091

This is obviously just pseudocode. What I want to do is send the sniffed data to a port and have kafka read it continuously. I don't want kafka to read from a file continuously, because that would mean a tremendous number of read/write operations on a single file, which is inefficient.

I searched the internet and came across kafka-connect but I can't find any useful documentation for implementing something like this.

What's the best way to implement something like this?

Thanks!

Answer

Using netcat

No need to write a server; you can use netcat (and have your script read from standard input):

shell1> nc -l 8888 | ./producer.sh
shell2> sudo tshark -l | nc 127.1 8888

tshark's -l option prevents it from buffering the output too much (it flushes after each packet).

Using a named pipe

You could also use a named pipe to transmit tshark output to your second process:

shell1> mkfifo /tmp/tsharkpipe
shell1> tail -f -c +0 /tmp/tsharkpipe | ./producer.sh
shell2> sudo tshark -l > /tmp/tsharkpipe
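The fifo plumbing can be verified without root or a running broker by substituting echo and cat for tshark and producer.sh (paths below are illustrative):

```shell
# create the named pipe (remove any stale one first)
rm -f /tmp/demopipe
mkfifo /tmp/demopipe

# reader: stands in for "tail -f -c +0 /tmp/tsharkpipe | ./producer.sh"
cat /tmp/demopipe > /tmp/demopipe.out &

# writer: stands in for "sudo tshark -l > /tmp/tsharkpipe"
echo "packet 1" > /tmp/demopipe

wait
cat /tmp/demopipe.out   # prints "packet 1"
```

In the real setup, `tail -f -c +0` is used instead of a plain `cat` so the reader keeps running across writer restarts; a plain `cat` exits as soon as the writer closes its end of the pipe.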
