OpenCV cvRemap Cropping Image


Problem Description


I am very new to OpenCV (2.1), so please keep that in mind.

I managed to calibrate the cheap web camera I am using (with a wide-angle attachment) via the checkerboard calibration method, producing the intrinsic matrix and distortion coefficients.

I then had no trouble feeding these values back in to produce the image maps, which I apply to a video feed to correct the incoming images.

I run into an issue, however. I know that when OpenCV warps/corrects the image, it creates several skewed sections and then crops the image to remove any black areas. My question is: can I view the complete warped image, including the regions with black areas? Below is an example of what I mean, in case my terminology is off:

An image that better conveys the regions I am talking about can be found here! That image was found in this post.

Currently: cvRemap() basically returns the yellow box in the image linked above, but I want to see the whole image, as there is relevant data I am looking to extract from it.

What I've tried: applying a scale conversion to the image maps to fit the complete image (including the stretched parts) into the frame:

        CvMat *intrinsic  = (CvMat*)cvLoad( "Intrinsics.xml" );
        CvMat *distortion = (CvMat*)cvLoad( "Distortion.xml" );

        cvInitUndistortMap( intrinsic, distortion, mapx, mapy );

        // Attempted zoom-out: mapx = 1.25*mapx - shift_x,
        //                     mapy = 1.25*mapy - shift_y
        cvConvertScale( mapx, mapx, 1.25, -shift_x );
        cvConvertScale( mapy, mapy, 1.25, -shift_y );

        cvRemap( distorted, undistorted, mapx, mapy );

The cvConvertScale, even when I think I have the x/y shifts aligned correctly (by guess-and-check), somehow distorts the image map, making the correction useless. There may be some math involved here that I am not following correctly.

Does anyone have any other suggestions for solving this problem, or about what I might be doing wrong? I've also tried writing my own code to fix the distortion, but let's just say OpenCV already knows how to do it well.

Solution

From memory, you need to use InitUndistortRectifyMap(cameraMatrix,distCoeffs,R,newCameraMatrix,map1,map2), of which InitUndistortMap is a simplified version.

cvInitUndistortMap( intrinsic, distort, map1, map2 )

is equivalent to:

cvInitUndistortRectifyMap( intrinsic, distort, Identity matrix, intrinsic, 
                           map1, map2 )

The new parameters are R and newCameraMatrix. R specifies an additional transformation (e.g. a rotation) to perform before remapping (just set it to the identity matrix).

The parameter of interest to you is newCameraMatrix. In InitUndistortMap this is the same as the original camera matrix, but you can use it to get that scaling effect you're talking about.

You get the new camera matrix with GetOptimalNewCameraMatrix(cameraMat, distCoeffs, imageSize, alpha,...). You basically feed in intrinsic, distort, your original image size, and a parameter alpha (along with containers to hold the result matrix, see documentation). The parameter alpha will achieve what you want.

I quote from the documentation:

The function computes the optimal new camera matrix based on the free scaling parameter. By varying this parameter the user may retrieve only sensible pixels alpha=0, keep all the original image pixels if there is valuable information in the corners alpha=1, or get something in between. When alpha>0, the undistortion result will likely have some black pixels corresponding to "virtual" pixels outside of the captured distorted image. The original camera matrix, distortion coefficients, the computed new camera matrix and the newImageSize should be passed to InitUndistortRectifyMap to produce the maps for Remap.

So for your extreme example, with all the black bits showing, you want alpha=1.

In summary:

  • Call cvGetOptimalNewCameraMatrix with alpha=1 to obtain newCameraMatrix.
  • Use cvInitUndistortRectifyMap with R set to the identity matrix and newCameraMatrix set to the one you just calculated.
  • Feed the new maps into cvRemap.
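Putting those three steps together, a minimal sketch using the OpenCV 2.x C API might look like the following. This is untested and the image size, matrix types, and XML file names are carried over from the question as assumptions; note also that the C wrapper cvGetOptimalNewCameraMatrix may not be present in 2.1 itself, in which case the C++ cv::getOptimalNewCameraMatrix is the equivalent.

```c
#include <opencv/cv.h>

/* Sketch only: undistort while keeping the full warped view (alpha = 1). */
static void remap_full_view( IplImage *distorted, IplImage *undistorted )
{
    CvMat *intrinsic  = (CvMat*)cvLoad( "Intrinsics.xml" );
    CvMat *distortion = (CvMat*)cvLoad( "Distortion.xml" );

    CvSize imageSize = cvGetSize( distorted );
    CvMat *newCameraMatrix = cvCreateMat( 3, 3, CV_32FC1 );

    /* alpha = 1.0: keep every source pixel, black borders included */
    cvGetOptimalNewCameraMatrix( intrinsic, distortion, imageSize, 1.0,
                                 newCameraMatrix, imageSize, NULL, 0 );

    /* R = identity: no rectification, just undistortion */
    CvMat *R = cvCreateMat( 3, 3, CV_32FC1 );
    cvSetIdentity( R, cvRealScalar(1) );

    IplImage *mapx = cvCreateImage( imageSize, IPL_DEPTH_32F, 1 );
    IplImage *mapy = cvCreateImage( imageSize, IPL_DEPTH_32F, 1 );
    cvInitUndistortRectifyMap( intrinsic, distortion, R, newCameraMatrix,
                               mapx, mapy );

    cvRemap( distorted, undistorted, mapx, mapy,
             CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0) );
}
```

The only changes from the question's code are recomputing the maps through cvInitUndistortRectifyMap with the optimal new camera matrix, rather than rescaling the old maps with cvConvertScale.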
