Correcting fisheye distortion programmatically


BOUNTY STATUS UPDATE:

I discovered how to map a linear lens, from destination coordinates to source coordinates.

How do you calculate the radial distance from the centre to go from fisheye to rectilinear?

1. I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code in the style of the converting functions I posted?

2. I also see that my undistortion is imperfect on some lenses - presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination coordinates for those lenses? Again, more code than just mathematical formulae please...


Question as originally stated:

I have some points that describe positions in a picture taken with a fisheye lens.

I want to convert these points to rectilinear coordinates. I want to undistort the image.

I've found this description of how to generate a fisheye effect, but not how to reverse it.

There's also a blog post that describes how to use tools to do it; these pictures are from that:

(1) : SOURCE Original photo link

Input : Original image with fish-eye distortion to fix.

(2) : DESTINATION Original photo link

Output : Corrected image (technically also with perspective correction, but that's a separate step).

How do you calculate the radial distance from the centre to go from fisheye to rectilinear?

My function stub looks like this:

Point correct_fisheye(const Point& p,const Size& img) {
    // to polar
    const Point centre = {img.width/2,img.height/2};
    const Point rel = {p.x-centre.x,p.y-centre.y};
    const double theta = atan2(rel.y,rel.x);
    double R = sqrt((rel.x*rel.x)+(rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x+R*cos(theta),centre.y+R*sin(theta));
    fprintf(stderr,"(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",p.x,p.y,img.width,img.height,theta,R,ret.x,ret.y);
    return ret;
}

Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?

Solution

The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by

R_u = f*tan(theta)

and the projection by common fisheye lens cameras (that is, distorted) is modeled by

R_d = 2*f*sin(theta/2)

You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,

R_u = f*tan(2*asin(R_d/(2*f)))

is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
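As a concrete sketch, the stub above could be completed like this (in Python rather than C++ for brevity, and assuming the focal length f is already known; the function name and the clamping of the asin argument are illustrative, not part of any library):

```python
import math

def correct_fisheye(x, y, width, height, f):
    """Map a fisheye pixel (x, y) to its rectilinear position.

    Assumes the equisolid-angle model R_d = 2*f*sin(theta/2), so the
    corrected radius is R_u = f*tan(2*asin(R_d/(2*f))).
    """
    cx, cy = width / 2.0, height / 2.0
    # to polar coordinates relative to the image centre
    dx, dy = x - cx, y - cy
    phi = math.atan2(dy, dx)
    r_d = math.hypot(dx, dy)
    # undistort the radius; clamp so asin stays in its domain
    s = min(r_d / (2.0 * f), 1.0)
    r_u = f * math.tan(2.0 * math.asin(s))
    # back to rectangular coordinates
    return cx + r_u * math.cos(phi), cy + r_u * math.sin(phi)
```

Near the centre R_u ≈ R_d (the correction vanishes), and points farther from the centre are pushed outward, which is the expected behaviour when flattening a fisheye image.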

In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (don't forget to check the correction). Then you can use a program such as this one (written with the Python bindings for OpenCV) in order to reverse lens distortion:

#!/usr/bin/python

# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

import sys
import cv

def main(argv):
    if len(argv) < 10:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)

    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS,  cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)

    cv.SaveImage(output, dst)


if __name__ == '__main__':
    main(sys.argv)

Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
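To illustrate the difference: OpenCV's model expresses the distorted radius as a polynomial in the undistorted one, roughly r_d = r_u*(1 + k1*r_u^2 + k2*r_u^4) plus tangential terms, which has no closed-form inverse; undistortion is done numerically. A minimal sketch of that inversion by fixed-point iteration, in normalized image coordinates and ignoring the tangential terms p1, p2 (the function names are illustrative):

```python
def distort_radius(r_u, k1, k2):
    """OpenCV-style radial distortion (tangential terms omitted)."""
    r2 = r_u * r_u
    return r_u * (1.0 + k1 * r2 + k2 * r2 * r2)

def undistort_radius(r_d, k1, k2, iterations=20):
    """Invert distort_radius by fixed-point iteration:
    r_u = r_d / (1 + k1*r_u^2 + k2*r_u^4), starting from r_u = r_d.
    Converges for moderate distortion coefficients."""
    r_u = r_d
    for _ in range(iterations):
        r2 = r_u * r_u
        r_u = r_d / (1.0 + k1 * r2 + k2 * r2 * r2)
    return r_u
```

This is essentially what cv.InitUndistortMap does internally for every pixel when it builds mapx and mapy, which is why building the maps once and reusing cv.Remap per frame is fast enough for live video.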
