How to continuously feed sniffed packets to kafka?

Problem Description

Currently I am sniffing packets from my local wlan interface like :

sudo tshark >sampleData.pcap

However, I need to feed this data to kafka.

Currently, I have a kafka producer script producer.sh:

../bin/kafka-console-producer.sh --broker-list localhost:9092 --topic 'spark-kafka'

and feed data to kafka like this:

producer.sh < sampleData.pcap

where in sampleData.pcap I have pre-captured IP packet information.

However, I wanna automate the process where it'd be something like this:

sudo tshark > http://localhost:9091
producer.sh < http://localhost:9091

This is obviously just a pseudo-algorithm. What I want to do is send the sniffed data to a port and have kafka continuously read from it. I don't want kafka to read from a file continuously, because that would mean a tremendous number of read/write operations on a single file, causing inefficiency.

I searched the internet and came across kafka-connect but I can't find any useful documentation for implementing something like this.

What's the best way to implement something like this?

Thanks!

Answer

With netcat

No need to write a server, you can use netcat (and tell your script to listen on the standard input):

shell1> nc -l 8888 | ./producer.sh
shell2> sudo tshark -l | nc 127.1 8888

The -l flag of tshark prevents it from buffering the output too much (it flushes after each packet).
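To confirm that packets are actually arriving on the topic, you can attach a console consumer to it. This is a sketch that assumes the same Kafka installation layout as producer.sh above; older Kafka versions take --zookeeper localhost:2181 instead of --bootstrap-server:

```shell
# Print every message published so far on the 'spark-kafka' topic.
# Assumes a broker is running on localhost:9092 (adjust paths/flags to your Kafka version).
../bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 'spark-kafka' --from-beginning
```

Each sniffed line sent by the producer side should appear here as one message, since the console producer treats every input line as a separate record.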

With a named pipe

You could also use a named pipe to transmit tshark output to your second process:

shell1> mkfifo /tmp/tsharkpipe
shell1> tail -f -c +0 /tmp/tsharkpipe | ./producer.sh
shell2> sudo tshark -l > /tmp/tsharkpipe
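The two shells can also be folded into one throwaway script. This is a sketch under the same assumptions as above (producer.sh in the current directory, tshark run via sudo), with the pipe cleaned up on exit:

```shell
#!/bin/sh
# Named-pipe variant as a single script (illustrative; paths are examples).
PIPE=/tmp/tsharkpipe
mkfifo "$PIPE"                            # create the pipe
trap 'rm -f "$PIPE"' EXIT                 # remove it when the script exits
tail -f -c +0 "$PIPE" | ./producer.sh &   # reader side, in the background
sudo tshark -l > "$PIPE"                  # writer side; blocks until capture ends
```

Starting the reader first matters: opening a FIFO for writing blocks until some process has it open for reading, so tshark would otherwise hang before capturing anything.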
