kafka connect - jdbc sink sql exception


Problem description

I am using the Confluent Community edition for a simple setup consisting of a REST client calling the Kafka REST proxy, which then pushes that data into an Oracle database using the provided JDBC sink connector.

I noticed that if there is a SQL exception, for instance if the actual data's length is greater than the defined column length, the task stops. If I restart it, the same thing happens: it tries to insert the erroneous entry and stops again. It does not insert the other entries.

Is there a way I can log the erroneous entry and let the task continue inserting the other data?

Recommended answer

The Kafka Connect framework for sink connectors can only skip problematic records when an exception is thrown during:

  • key or value conversion (Converter::toConnectData(...))
  • transformation (Transformation::apply)

For that you can use the errors.tolerance property:

"errors.tolerance": "all"

There are also some additional properties for printing details about errors: errors.log.enable and errors.log.include.messages. Original answer: Apache Kafka JDBC Connector - SerializationException: Unknown magic byte
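As a minimal sketch, the properties above go into the sink connector's configuration. The connector name, topic, and connection settings below are hypothetical placeholders; only the connector class and the error-handling properties come from the answer above:

```json
{
  "name": "oracle-jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "my-topic",
    "connection.url": "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1",
    "errors.tolerance": "all",
    "errors.log.enable": "true",
    "errors.log.include.messages": "true"
  }
}
```

Note that, as explained above, these settings only let the task skip records that fail during conversion or transformation; they do not cover SQL errors raised while writing to the database.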

If an exception is thrown while delivering messages, the sink task is killed. If you need to handle communication (or other) errors with an external system, you have to add support for that in your connector.

The JDBC connector retries when an SQLException is thrown, but it does not skip any records.

The number of retries and the interval between them are managed by the following properties:

  • max.retries (default value: 10)
  • retry.backoff.ms (default: 3000)
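For illustration, the retry behaviour can be tuned in the sink connector's properties file; the values shown here are simply the defaults listed above:

```properties
# Number of times to retry on a SQLException before failing the task (default 10)
max.retries=10
# Time in milliseconds to wait between retry attempts (default 3000)
retry.backoff.ms=3000
```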
