kafka connect - jdbc sink sql exception
Question
I am using the Confluent Community edition for a simple setup consisting of a REST client calling the Kafka REST proxy, which then pushes that data into an Oracle database using the provided JDBC sink connector.
I noticed that if there is a SQL exception, for instance if the actual data's length is greater than the defined column length, the task stops; if I restart it, the same thing happens: it tries to insert the erroneous entry and stops again. It does not insert the other entries.
Is there a way I can log the erroneous entry and let the task continue inserting the other data?
Answer
The Kafka Connect framework for sink connectors can only skip problematic records when the exception is thrown during:
- conversion of keys or values (Converter::toConnectData(...))
- transformation (Transformation::apply)
For that you can use the errors.tolerance property:
"errors.tolerance": "all"
There are some additional properties for printing details regarding errors: errors.log.enable, errors.log.include.messages.
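Putting these properties together, a sink connector configuration with error tolerance and error logging enabled might look like the sketch below. The connector name, topic, and connection details are placeholders you would replace with your own; note that, as explained above, errors.tolerance only covers conversion and transformation failures, not the SQLException raised during the insert itself.

```json
{
  "name": "oracle-jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "my-topic",
    "connection.url": "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1",
    "connection.user": "connect_user",
    "connection.password": "********",
    "errors.tolerance": "all",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true"
  }
}
```

This JSON can be submitted to the Kafka Connect REST API (POST /connectors) to create the connector, or the inner "config" map can be used to update an existing one.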
Original answer: Apache Kafka JDBC Connector - SerializationException: Unknown magic byte
If an exception is thrown while delivering messages, the Sink Task is killed. If you need to handle communication errors (or others) with an external system, you have to add that support to your connector.
The JDBC connector, when a SQLException is thrown, makes retries but doesn't skip any records.
The number of retries and the interval between them is managed by the following properties:
- max.retries (default value 10)
- retry.backoff.ms (default 3000)
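As a sketch, these retry properties can be tuned in the same connector "config" map; the values shown below are simply the documented defaults made explicit:

```json
{
  "max.retries": "10",
  "retry.backoff.ms": "3000"
}
```

Raising max.retries or retry.backoff.ms only delays the failure for a persistent error like an oversized value, since the same record is retried each time; for such data errors the row itself has to be fixed or filtered before the sink.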