using pyspark, read/write 2D images on hadoop file system


Problem description

I want to be able to read/write images on an hdfs file system and take advantage of hdfs data locality.

I have a collection of images where each image is composed of

  • 2D arrays of uint16
  • basic additional information stored as an xml file.

I want to create an archive over the hdfs file system, and use spark for analyzing the archive. Right now I am struggling over the best way to store the data on hdfs in order to take full advantage of the spark+hdfs structure.

From what I understand, the best way would be to create a sequenceFile wrapper. I have two questions:

  • Is creating a sequenceFile wrapper the best way?
  • Does anybody have any pointers to examples I could use to start with? I can't be the first one that needs to read something other than text files on hdfs through spark!

Solution

I have found a solution that works: using pyspark 1.2.0's binaryFiles does the job. It is flagged as experimental, but I was able to read tiff images with the proper combination of OpenCV.

import cv2
import numpy as np

# build rdd and take one element for testing purposes
L = sc.binaryFiles('hdfs://localhost:9000/*.tif').take(1)

# convert the file content to a bytearray, then to an np array
file_bytes = np.asarray(bytearray(L[0][1]), dtype=np.uint8)

# use opencv to decode the np byte array (1 = cv2.IMREAD_COLOR)
R = cv2.imdecode(file_bytes, 1)
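
The take(1) above only pulls a single file back to the driver for testing. To decode every image in parallel on the executors, the same conversion can be mapped over the RDD. A minimal sketch under the same assumptions (cv2 and numpy importable on every worker node; decode_image is just an illustrative helper, and cv2.IMREAD_UNCHANGED is used so the uint16 pixel depth of the tiffs survives decoding):

def decode_image(kv):
    # kv is a (path, content) pair as returned by binaryFiles
    path, content = kv
    file_bytes = np.asarray(bytearray(content), dtype=np.uint8)
    # IMREAD_UNCHANGED keeps the original bit depth instead of forcing 8-bit color
    return path, cv2.imdecode(file_bytes, cv2.IMREAD_UNCHANGED)

# decode all tiffs in parallel; each record becomes (path, numpy array)
images = sc.binaryFiles('hdfs://localhost:9000/*.tif').map(decode_image)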

Note the help of pyspark:

binaryFiles(path, minPartitions=None)

    :: Experimental

    Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

    Note: Small files are preferred, large file is also allowable, but may cause bad performance.
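
binaryFiles only covers the read side. For the write side raised in the question, one possibility (my own untested sketch, not part of the original answer) is the sequenceFile route the question proposes: pyspark's saveAsSequenceFile converts bytearray values to Hadoop BytesWritable, so the raw tiff bytes can be packed into a single archive keyed by path. 'images_archive' below is a hypothetical output path:

# pack raw tiff bytes into one SequenceFile archive, keyed by original path
pairs = sc.binaryFiles('hdfs://localhost:9000/*.tif') \
          .map(lambda kv: (kv[0], bytearray(kv[1])))
pairs.saveAsSequenceFile('hdfs://localhost:9000/images_archive')

# read the archive back as (path, bytearray) records and decode as above
restored = sc.sequenceFile('hdfs://localhost:9000/images_archive')

Whether this beats storing the tiffs as individual files mostly depends on their number and size; bundling many small files into a SequenceFile is a common way to avoid HDFS's many-small-files overhead.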
