How can DataFrameReader read HTTP?


Problem description

My development environment:

  • IntelliJ IDEA
  • Maven
  • Scala 2.10.6
  • Windows 7 x64

Dependencies:

 <dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>2.2.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-mllib_2.10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.10</artifactId>
        <version>2.2.0</version>
        <scope>provided</scope>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql_2.10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>2.2.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-library -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>2.10.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-reflect -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-reflect</artifactId>
        <version>2.10.6</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.4</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.4</version>
    </dependency>
</dependencies>

Problem:
I want to read a remote CSV file into a DataFrame.
I tried the following:

val weburl = "http://myurl.com/file.csv"
val tfile = spark.read.option("header","true").option("inferSchema","true").csv(weburl)

It returns the following error:

Exception in thread "main" java.io.IOException: No FileSystem for scheme: http

I tried the following, based on searching the internet (including Stack Overflow):

val content = scala.io.Source.fromURL(weburl).mkString
val list = content.split("\n")
//...process the strings, cast types, and separate each row to build a DataFrame.
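
For reference, a minimal sketch of what that manual route can look like. It assumes `spark` is an existing SparkSession, the file has a header row, and a naive comma split is acceptable (no quoted fields); every column is left as a string:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// The whole file is downloaded on the driver, so this only suits small files.
val content = scala.io.Source.fromURL(weburl).mkString
val lines   = content.split("\n").filter(_.trim.nonEmpty)

// First line is treated as the header; the remaining lines become rows of strings.
val header = lines.head.split(",").map(_.trim)
val rows   = lines.tail.map(line => Row.fromSeq(line.split(",").map(_.trim)))

val schema = StructType(header.map(name => StructField(name, StringType, nullable = true)))
val df     = spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)

This quickly runs into the edge cases (quoting, escaping, type inference) that spark.read.csv already handles, which is why a DataFrameReader-based route would be preferable.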

It works fine, but I think there should be a smarter way to load a CSV file from a web source.
Is there any way DataFrameReader can read CSV over HTTP?

I think setting SparkContext.hadoopConfiguration is probably the key, so I tried many code snippets from the internet, but none of them worked, and I don't know how to set it or what each line means.

The following is one of my attempts, and it didn't work (same error message when accessing "http"):

val sc = new SparkContext(spark_conf)
val spark = SparkSession.builder.appName("Test").getOrCreate()
val hconf = sc.hadoopConfiguration


hconf.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
hconf.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
hconf.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)

Is this setting the key, or not?
Or can DataFrameReader not read directly from a remote source? If not, how can I do it?
Do I need to import some special library for HTTP?

What I want to know:

Is there any way DataFrameReader can read an HTTP source directly, without writing my own parsing code (as in Best way to convert online csv to dataframe scala)?
I need to read CSV format; CSV is a standard format. I'm looking for something as general as DataFrameReader.csv("local file").

I know this is a very basic question. I'm sorry for my low level of understanding.

Answer

As far as I know, it is not possible to read HTTP data directly. Probably the simplest thing you can do is to download the file using SparkFiles, but it will duplicate the data on each worker:

import org.apache.spark.SparkFiles

spark.sparkContext.addFile("http://myurl.com/file.csv")
spark.read.csv(SparkFiles.get("file.csv"))
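
If the header and inferSchema options from the first attempt are still wanted, they can be chained onto the same read; a sketch assuming the remote file keeps the name file.csv:

import org.apache.spark.SparkFiles

// addFile downloads the remote file; SparkFiles.get resolves the path of the local copy.
spark.sparkContext.addFile("http://myurl.com/file.csv")

val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv(SparkFiles.get("file.csv"))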

Personally, I would just download the file up front and put it in distributed storage.
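
A rough sketch of that approach, assuming HDFS as the distributed storage; the local path /tmp/file.csv and the HDFS path hdfs:///data/file.csv are only placeholders:

import java.net.URL
import java.nio.file.{Files, Paths, StandardCopyOption}
import org.apache.hadoop.fs.{FileSystem, Path}

// 1. Download the file once on the driver.
val in = new URL("http://myurl.com/file.csv").openStream()
try Files.copy(in, Paths.get("/tmp/file.csv"), StandardCopyOption.REPLACE_EXISTING)
finally in.close()

// 2. Copy it into distributed storage (HDFS here).
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.copyFromLocalFile(new Path("/tmp/file.csv"), new Path("hdfs:///data/file.csv"))

// 3. Read it with DataFrameReader as usual.
val df = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("hdfs:///data/file.csv")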
