OpenCV will not load a big image (~4GB)

Problem description

I'm working on a program that is to detect colored ground control points from a rather large image. The TIFF image is some 3 - 4 GB (aboud 35 000 x 33 000 pix). I am using Python 2, and OpenCV to do the image processing.

import cv2
img = 'ortho.tif'
I = cv2.imread(img, cv2.IMREAD_COLOR)

This part does not (always) produce an error message. While showing the image does:

cv2.imshow('image', I)

I have also tried showing the image by using matplotlib:

import matplotlib.pyplot as plt
plt.imshow(I[:, :, ::-1])  # Hack to change BGR (OpenCV) to RGB (matplotlib)

Is there any limitation in OpenCV or Python regarding large images? What would you suggest to get this image loaded?

PS: The computer I do this work on is a Windows 10 "workstation" (it has enough horsepower to deal with the image).

Thanks in advance for your help :)

Recommended answer

The implementation of imread() is:

Mat imread( const string& filename, int flags )
{
    Mat img;
    imread_( filename, flags, LOAD_MAT, &img );
    return img;
}

This allocates the matrix corresponding to the loaded image as one contiguous array. So this depends (at least partly) on your hardware: your machine must be able to allocate a contiguous 4 GB array in RAM (if you're on a Debian distro, you can check your RAM size by running, for example, vmstat -s -SM).
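
For a sense of scale, here is a rough back-of-the-envelope calculation, assuming the 35 000 x 33 000 image is loaded as an 8-bit, 3-channel BGR array (which is what cv2.imread with IMREAD_COLOR produces); the numbers are only illustrative:

# Rough estimate of the contiguous buffer cv2.imread would have to allocate
# for a 35 000 x 33 000 pixel, 3-channel, 8-bit image.
width, height, channels = 35000, 33000, 3
bytes_needed = width * height * channels   # 3 465 000 000 bytes
print(bytes_needed / (1024.0 ** 3))        # ~3.2 GiB, needed as one contiguous block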

Out of curiosity, I tried to get a contiguous memory array (a big one, but smaller than the one your 4 GB image requires) using ascontiguousarray, but even before that I stumbled on a memory allocation problem:

>>> img = numpy.zeros(shape=(35000,35000))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
>>>

In practice, even if you have enough RAM, it is not a good idea to manipulate the pixels of a 4 GB in-memory image; you will need to split it anyway into regions of interest, smaller areas, and maybe channels too, depending on the nature of the operations you want to perform on the pixels.
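
As an illustration only (not part of the original answer), here is a minimal sketch of such tiled processing; process_in_tiles and the per-tile detect callback are hypothetical names, and the sketch assumes the image array img is already in memory:

def process_in_tiles(img, tile_size=5000, detect=None):
    # Walk over the image in square regions of interest so each step
    # only touches a small, manageable slice of the big array.
    h, w = img.shape[:2]
    results = []
    for y in range(0, h, tile_size):
        for x in range(0, w, tile_size):
            roi = img[y:y + tile_size, x:x + tile_size]  # NumPy slicing gives a view, not a copy
            if detect is not None:
                # detect() stands for whatever per-tile routine you need,
                # e.g. color thresholding for the ground control points;
                # (x, y) lets you map tile coordinates back to the full image.
                results.append((x, y, detect(roi)))
    return results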

Edit 1:

As I said in my comment below your answer, if you have 16 GB of RAM and you're able to read that image with scikit, then there is no reason you cannot do the same with OpenCV.

Please give it a try:

import numpy as np # Do not forget to import numpy
import cv2    
img = cv2.imread('ortho.tif')

You forgot to import NumPy in your original code, and that is why OpenCV obviously failed to load the image. All OpenCV array structures are converted to and from NumPy arrays, and the images you read are represented by OpenCV as arrays in memory.
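
If cv2.imread() still returns None for this file, one hedged fallback (my own sketch, not from the answer) is to read the TIFF with scikit-image, which reportedly could already open it, and hand the resulting NumPy array to OpenCV; ortho.tif is the file name from the question:

import cv2
from skimage import io

img = cv2.imread('ortho.tif')
if img is None:
    # cv2.imread() signals failure by returning None rather than raising,
    # so always check the result before using it.
    rgb = io.imread('ortho.tif')                # scikit-image loads the TIFF as an RGB array
    img = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)  # switch to OpenCV's BGR channel order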

Edit 2:

OpenCV can deal with images whose size is up to 10 GB, but that is true for the cv2.imwrite() function. For cv2.imread(), however, the size of the image that can be read is much smaller: that is a bug reported in September 2013 (Issue3258 #1438) which is still, AFAIK, not fixed.
