Using a connector with Helm-installed Kafka/Confluent
Question
I have installed Kafka on a local Minikube by using the Helm charts https://github.com/confluentinc/cp-helm-charts, following these instructions https://docs.confluent.io/current/installation/installing_cp/cp-helm-charts/docs/index.html, like so:
helm install -f kafka_config.yaml confluentinc/cp-helm-charts --name kafka-home-delivery --namespace cust360
The kafka_config.yaml is almost identical to the default yaml, with the one exception being that I scaled it down to 1 server/broker instead of 3 (just because I'm trying to conserve resources on my local Minikube; hopefully that's not relevant to my problem).
Also running on Minikube is a MySQL instance. Here's the output of kubectl get pods --namespace myNamespace:
I want to connect MySQL and Kafka, using one of the connectors (like Debezium MySQL CDC, for instance). The instructions say:
Install the connector

Use the Confluent Hub client to install this connector with:
confluent-hub install debezium/debezium-connector-mysql:0.9.2
Sounds good, except that 1) I don't know which pod to run this command on, and 2) none of the pods seem to have a confluent-hub command available.
Questions:
- Is confluent-hub not installed via these Helm charts?
- Do I have to install confluent-hub myself?
- If so, on which pod do I install it?
Answer
Ideally this should be configurable as part of the helm script, but unfortunately it is not as of now. One way to work around this is to build a new Docker image from Confluent's Kafka Connect Docker image: download the connector manually, extract the contents into a folder, and copy them to a path in the container. Something like below.
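The manual download step above might look like the following. This is a hypothetical sketch: the exact artifact URL, version, and directory name are assumptions, not from the original answer; check Confluent Hub or Maven Central for the artifact matching your connector and version.

```shell
# Download the Debezium MySQL connector plugin archive and extract it
# into a local folder that the Dockerfile below can COPY from.
# The URL/version here is an assumption -- verify it for your setup.
mkdir -p debezium-connector-mysql
curl -fsSL \
  https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/0.9.2.Final/debezium-connector-mysql-0.9.2.Final-plugin.tar.gz \
  | tar -xz -C debezium-connector-mysql --strip-components=1
```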
Contents of Dockerfile
FROM confluentinc/cp-kafka-connect:5.2.1
COPY <connector-directory> /usr/share/java
/usr/share/java is the default location where Kafka Connect looks for plugins. You could also use a different location and provide the new location (plugin.path) during your helm installation.
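Overriding plugin.path at install time could be sketched as below. The values key (cp-kafka-connect.configurationOverrides) and the example path /opt/connectors are assumptions based on the chart's values.yaml layout; verify them against your chart version.

```shell
# Hypothetical sketch: pass a custom plugin.path to the Connect worker.
# Note that dots inside a key must be escaped when using --set.
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.configurationOverrides."plugin\.path"=/opt/connectors
```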
Build this image and host it somewhere accessible. You will also have to provide/override the image and tag details during the helm installation.
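The build-and-host step could look like this sketch, where "myrepo" is a placeholder registry/repository name, not something from the original answer:

```shell
# Build the custom Connect image from the Dockerfile above and push it
# to a registry the cluster can pull from.
docker build -t myrepo/cp-kafka-connect-debezium:5.2.1 .
docker push myrepo/cp-kafka-connect-debezium:5.2.1

# On a local Minikube you can instead build against Minikube's own Docker
# daemon, so no push/hosting is needed:
# eval $(minikube docker-env)
# docker build -t myrepo/cp-kafka-connect-debezium:5.2.1 .
```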
Here is the path to the values.yaml file. You can find the image and plugin.path values there.
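Putting it together, the override during installation might look like the sketch below. The key names (cp-kafka-connect.image, cp-kafka-connect.imageTag) and the "myrepo" image name are assumptions based on the chart's values.yaml; verify them against your chart version.

```shell
# Hypothetical sketch: point the chart's Connect deployment at the
# custom image built earlier instead of the stock cp-kafka-connect image.
helm install -f kafka_config.yaml confluentinc/cp-helm-charts \
  --name kafka-home-delivery --namespace cust360 \
  --set cp-kafka-connect.image=myrepo/cp-kafka-connect-debezium \
  --set cp-kafka-connect.imageTag=5.2.1
```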