Compute homography for a virtual camera with opencv

Question

I have an image of a planar surface, and I want to compute an image warping that gives me a synthetic view of the same planar surface seen from a virtual camera located at another point in 3D space.

So, given an image I1, I want to compute an image I2 that represents the image I1 seen from a virtual camera.

In theory, there exists a homography that relates these two images.

How do I compute this homography given the camera pose of the virtual camera, as well as its matrix of internal parameters?

I'm using opencv's warpPerspective() function to apply this homography and generate the warped image.
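
For reference, the homography the question asks for has a standard closed form: for a plane with unit normal n at distance d from the first camera, and a virtual camera related to it by rotation R and translation t, the warp is H = K2 (R - t n^T / d) K1^(-1). Below is a minimal NumPy sketch of this, where K1, K2, R, t, n and d are illustrative placeholder values (not taken from this question) and the exact sign convention depends on how the plane and the translation are defined:

import cv2
import numpy as np

# Illustrative placeholders, not values from the question:
# K1, K2 : 3x3 intrinsics of the real and virtual cameras
# R, t   : rotation and translation of the virtual camera w.r.t. the real one
# n, d   : unit normal and distance of the plane in the real camera's frame
K1 = K2 = np.array([[500.0,   0.0, 320.0],
                    [  0.0, 500.0, 240.0],
                    [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])   # 3x1 translation
n = np.array([[0.0], [0.0], [1.0]])   # plane normal in the first camera's frame
d = 1.0                               # distance from the first camera to the plane

# Plane-induced homography: H = K2 (R - t n^T / d) K1^(-1)
H = K2 @ (R - (t @ n.T) / d) @ np.linalg.inv(K1)
H /= H[2, 2]

img = cv2.imread('plane.jpeg')        # hypothetical input image
if img is not None:
    h, w = img.shape[:2]
    warped = cv2.warpPerspective(img, H, (w, h))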

Thanks.

Answer

Ok, found this post (Opencv virtually camera rotating/translating for bird's eye view), which has some code doing what I needed.

However, I noticed that the rotation around Y had a sign error (-sin instead of sin). Here's my solution adapted for Python. I'm new to Python, sorry if I'm doing something ugly.

import cv2
import numpy as np

rotXdeg = 90
rotYdeg = 90
rotZdeg = 90
f = 500
dist = 500

def onRotXChange(val):
    global rotXdeg
    rotXdeg = val
def onRotYChange(val):
    global rotYdeg
    rotYdeg = val
def onRotZChange(val):
    global rotZdeg
    rotZdeg = val
def onFchange(val):
    global f
    f=val
def onDistChange(val):
    global dist
    dist=val

if __name__ == '__main__':

    #Read input image, and create output image
    src = cv2.imread('/home/miquel/image.jpeg')
    dst = np.ndarray(shape=src.shape,dtype=src.dtype)

    #Create a user interface with trackbars that allow modifying the parameters of the transformation
    wndname1 = "Source:"
    wndname2 = "WarpPerspective: "
    cv2.namedWindow(wndname1, 1)
    cv2.namedWindow(wndname2, 1)
    cv2.createTrackbar("Rotation X", wndname2, rotXdeg, 180, onRotXChange)
    cv2.createTrackbar("Rotation Y", wndname2, rotYdeg, 180, onRotYChange)
    cv2.createTrackbar("Rotation Z", wndname2, rotZdeg, 180, onRotZChange)
    cv2.createTrackbar("f", wndname2, f, 2000, onFchange)
    cv2.createTrackbar("Distance", wndname2, dist, 2000, onDistChange)

    #Show original image
    cv2.imshow(wndname1, src)

    h, w = src.shape[:2]

    while True:

        rotX = (rotXdeg - 90)*np.pi/180
        rotY = (rotYdeg - 90)*np.pi/180
        rotZ = (rotZdeg - 90)*np.pi/180

        #Projection 2D -> 3D matrix
        A1= np.matrix([[1, 0, -w/2],
                       [0, 1, -h/2],
                       [0, 0, 0   ],
                       [0, 0, 1   ]])

        # Rotation matrices around the X,Y,Z axis
        RX = np.matrix([[1,           0,            0, 0],
                        [0,np.cos(rotX),-np.sin(rotX), 0],
                        [0,np.sin(rotX),np.cos(rotX) , 0],
                        [0,           0,            0, 1]])

        RY = np.matrix([[ np.cos(rotY), 0, np.sin(rotY), 0],
                        [            0, 1,            0, 0],
                        [-np.sin(rotY), 0, np.cos(rotY), 0],
                        [            0, 0,            0, 1]])

        RZ = np.matrix([[ np.cos(rotZ), -np.sin(rotZ), 0, 0],
                        [ np.sin(rotZ), np.cos(rotZ), 0, 0],
                        [            0,            0, 1, 0],
                        [            0,            0, 0, 1]])

        #Composed rotation matrix with (RX,RY,RZ)
        R = RX * RY * RZ

        #Translation matrix along the Z axis; changing dist changes the camera height above the plane
        T = np.matrix([[1,0,0,0],
                       [0,1,0,0],
                       [0,0,1,dist],
                       [0,0,0,1]])

        #Camera intrinsics matrix 3D -> 2D
        A2= np.matrix([[f, 0, w/2,0],
                       [0, f, h/2,0],
                       [0, 0,   1,0]])

        # Final and overall transformation matrix
        H = A2 * (T * (R * A1))

        # Apply the homography to warp the source image
        cv2.warpPerspective(src, H, (w, h), dst, cv2.INTER_CUBIC)

        #Show the warped image; press Esc to quit
        cv2.imshow(wndname2, dst)
        if cv2.waitKey(1) & 0xFF == 27:
            break

    cv2.destroyAllWindows()
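
In effect, the chain A2 * (T * (R * A1)) is exactly the homography asked about: A1 lifts image pixels onto the z = 0 plane (centred on the image), R and T apply the virtual camera's rotation and its translation along Z, and A2 reprojects the result through a pinhole model with focal length f and principal point (w/2, h/2). Each pass of the loop rebuilds H from the current trackbar values and re-warps the source image, so dragging the sliders amounts to moving the virtual camera.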
