How to configure Flink to understand the Azure Data Lake file system?


Problem description


I am using Flink to read data from Azure Data Lake, but Flink is not able to find the Azure Data Lake file system. How do I configure Flink to understand the Azure Data Lake file system? Could anyone guide me on this?

Answer


Flink can connect to any Hadoop-compatible file system (i.e., one that implements org.apache.hadoop.fs.FileSystem). See the explanation here: https://ci.apache.org/projects/flink/flink-docs-release-0.8/example_connectors.html
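To make the Hadoop-compatibility point concrete: Flink picks the file system from the URI scheme of the path you give it, so an adl:// path is handed to whichever Hadoop FileSystem implementation is registered for the adl scheme. A minimal self-contained sketch of that scheme dispatch, where the account name myaccount is a placeholder:

```java
import java.net.URI;

public class AdlSchemeDispatch {
    public static void main(String[] args) {
        // A path you might pass to a Flink source such as env.readTextFile(...).
        // Flink does not handle the "adl" scheme natively, so it delegates to
        // the Hadoop FileSystem registered for that scheme in core-site.xml.
        URI path = URI.create("adl://myaccount.azuredatalakestore.net/clickstream/2017/");

        String scheme = path.getScheme(); // "adl" -> resolved via fs.adl.impl
        String host   = path.getHost();   // the Data Lake Store account endpoint
        System.out.println(scheme + " " + host);
        // prints: adl myaccount.azuredatalakestore.net
    }
}
```

This is why the fix is configuration rather than code: once Hadoop knows what class backs the adl scheme, the same Flink job works unchanged.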


In core-site.xml, you should add the ADLS-specific configuration. You will also need the ADL jars on the classpath wherever the Flink agents run.
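The ADLS-specific configuration mentioned above can be sketched as follows. This is a minimal core-site.xml fragment for Azure Data Lake Store Gen1 via Hadoop's hadoop-azure-datalake module; the tenant ID, client ID, and client secret are placeholders for your own Azure AD service principal values:

```xml
<!-- core-site.xml: minimal ADLS Gen1 sketch; replace the placeholder credentials -->
<configuration>
  <!-- Register the ADL file system implementation for the adl:// scheme -->
  <property>
    <name>fs.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.Adl</value>
  </property>
  <!-- Authenticate with an Azure AD service principal (client credentials flow) -->
  <property>
    <name>dfs.adls.oauth2.access.token.provider.type</name>
    <value>ClientCredential</value>
  </property>
  <property>
    <name>dfs.adls.oauth2.refresh.url</name>
    <value>https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/token</value>
  </property>
  <property>
    <name>dfs.adls.oauth2.client.id</name>
    <value>YOUR_CLIENT_ID</value>
  </property>
  <property>
    <name>dfs.adls.oauth2.credential</name>
    <value>YOUR_CLIENT_SECRET</value>
  </property>
</configuration>
```

With this in place, adl:// paths become resolvable, provided the hadoop-azure-datalake jar (and its azure-data-lake-store-sdk dependency) is also on the classpath of every Flink process.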


It's basically the same concept as outlined in this blog, just adapted to Flink: https://medium.com/azure-data-lake/connecting-your-own-hadoop-or-spark-to-azure-data-lake-store-93d426d6a5f4
