UnsatisfiedLinkError on librocksdb DLL when developing with Kafka Streams

Problem Description

I'm writing a Kafka Streams application on my development Windows machine. If I try to use the leftJoin and branch features of Kafka Streams I get the error below when executing the jar application:

Exception in thread "StreamThread-1" java.lang.UnsatisfiedLinkError: C:\Users\user\AppData\Local\Temp\librocksdbjni325337723194862275.dll: Can't find dependent libraries
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
    at java.lang.Runtime.load0(Runtime.java:809)
    at java.lang.System.load(System.java:1086)
    at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
    at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
    at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
    at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
    at org.rocksdb.Options.<clinit>(Options.java:22)
    at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:115)
    at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:38)
    at org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:75)
    at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:72)
    at org.apache.kafka.streams.state.internals.ChangeLoggingSegmentedBytesStore.put(ChangeLoggingSegmentedBytesStore.java:54)
    at org.apache.kafka.streams.state.internals.MeteredSegmentedBytesStore.put(MeteredSegmentedBytesStore.java:101)
    at org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:109)
    at org.apache.kafka.streams.state.internals.RocksDBWindowStore.put(RocksDBWindowStore.java:101)
    at org.apache.kafka.streams.kstream.internals.KStreamJoinWindow$KStreamJoinWindowProcessor.process(KStreamJoinWindow.java:65)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:48)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:134)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
    at org.apache.kafka.streams.kstream.internals.KStreamFlatMapValues$KStreamFlatMapValuesProcessor.process(KStreamFlatMapValues.java:43)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:48)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:134)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
    at org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
    at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:48)
    at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:134)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
    at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:70)
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:197)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:641)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)

It seems like Kafka does not find a DLL, but wait...I'm developing a Java application!

What could be the problem? And why doesn't this error show up if I try simpler streaming operations, like only a filter?

Update:

This problem arises only when a message is present in the broker, which matches the stack trace: the RocksDB store is only opened when the first record is put into the join's window store (Segments.getOrCreateSegment). I'm using Kafka Streams version 0.10.2.1.

This is the code snippet that raises the problem:

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

// The Avro serde classes come from the Confluent serializer artifacts.
import io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig;
import io.confluent.kafka.streams.serdes.avro.GenericAvroSerde;

public class KafkaStreamsMainClass {

    // Placeholder: the actual topic name is not part of the original snippet.
    private static final String SOURCE_TOPIC = "source-topic";

    private KafkaStreamsMainClass() {
    }

    public static void main(final String[] args) throws Exception {
        Properties streamsConfiguration = new Properties();
        streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-streams");
        streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-server:9092");
        streamsConfiguration.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "schema-registry:8082");
        streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10 * 1000);
        streamsConfiguration.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
        streamsConfiguration.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, GenericAvroSerde.class);
        streamsConfiguration.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, GenericAvroSerde.class);
        KStreamBuilder builder = new KStreamBuilder();
        KStream<GenericRecord, GenericRecord> sourceStream = builder.stream(SOURCE_TOPIC);

        // Stateless filter: records that already carry an endTime.
        KStream<GenericRecord, GenericRecord> finishedFiltered = sourceStream
                .filter((GenericRecord key, GenericRecord value) -> value.get("endTime") != null);

        // Branch the unfinished records into two streams.
        KStream<GenericRecord, GenericRecord>[] branchedStreams = sourceStream
                .filter((GenericRecord key, GenericRecord value) -> value.get("endTime") == null)
                .branch((GenericRecord key, GenericRecord value) -> value.get("firstField") != null,
                        (GenericRecord key, GenericRecord value) -> value.get("secondField") != null);

        // Windowed joins: these are stateful and create RocksDB-backed
        // window stores, which is where the native library gets loaded.
        branchedStreams[0] = finishedFiltered.join(branchedStreams[0],
                (GenericRecord value1, GenericRecord value2) -> {
                    return value1;
                }, JoinWindows.of(TimeUnit.SECONDS.toMillis(2)));

        branchedStreams[1] = finishedFiltered.join(branchedStreams[1],
                (GenericRecord value1, GenericRecord value2) -> {
                    return value1;
                }, JoinWindows.of(TimeUnit.SECONDS.toMillis(2)));

        KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
        streams.setUncaughtExceptionHandler((Thread thread, Throwable throwable) -> {
            throwable.printStackTrace();
        });
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }

}

I opened the rocksdbjni-5.0.1.jar archive downloaded by Maven, and it does include the librocksdbjni-win64.dll library. It seems that the loader is trying to retrieve the library from outside the RocksDB jar instead of from inside it.
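
The stack trace shows the mechanism: NativeLibraryLoader.loadLibraryFromJar copies the platform DLL out of the jar into java.io.tmpdir and then hands the extracted file to System.load. A simplified sketch of that mechanism (my own code, not the actual RocksDB source):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NativeLoadSketch {

    public static void main(String[] args) throws Exception {
        // Locate the DLL bundled at the root of rocksdbjni-5.0.1.jar.
        try (InputStream in = NativeLoadSketch.class
                .getResourceAsStream("/librocksdbjni-win64.dll")) {
            if (in == null) {
                throw new IllegalStateException("DLL not found on the classpath");
            }
            // Extract it to the temp directory, mirroring what
            // NativeLibraryLoader.loadLibraryFromJar does.
            Path tmp = Files.createTempFile("librocksdbjni", ".dll");
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            // This is the call that fails in the stack trace above. On Windows,
            // "Can't find dependent libraries" means the extracted DLL was
            // found, but a library it links against is missing on the system.
            System.load(tmp.toAbsolutePath().toString());
            System.out.println("Loaded " + tmp);
        }
    }
}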

I'm developing on a Windows 7 machine.

Have you ever run into this problem?

Recommended Answer

I updated my kafka-streams project to the latest released version 1.0.0.

This version suffers from a different bug, but after patching it and uploading the patched version to our internal Artifactory server, we were able to run our kafka-streams agent on both Windows and Linux. The next versions, 1.0.1 and 1.1.0, will include the fix for that bug, so as soon as one of them is released we will switch to it instead of our patched version.
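
For reference, the upgrade itself is just a version bump of the kafka-streams dependency; a minimal Maven fragment (these are the coordinates Apache publishes):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>1.0.0</version>
</dependency>

Note that 1.0.0 deprecates KStreamBuilder in favor of StreamsBuilder, so code like the snippet in the question will compile with deprecation warnings until it is migrated.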

To sum up, the Kafka guys solved this bug with the 1.0.0 release.
