How to use OpenGL to emulate OpenCV's warpPerspective functionality (perspective transform)


Question

I've done image warping using OpenCV in Python and C++, see the Coca Cola logo warped in place in the corners I had selected:

Using the following image:

And this one:

Full album with transition pictures and explanations here

I need to do exactly this, but in OpenGL. I'll have:

  • Corners inside which I've to map the warped image

A homography matrix that maps the transformation of the logo image into the logo image you see inside the final image (using OpenCV's warpPerspective), something like this:

[[  2.59952324e+00,   3.33170976e-01,  -2.17014066e+02],
[  8.64133587e-01,   1.82580111e+00,  -3.20053715e+02],
[  2.78910149e-03,   4.47911310e-05,   1.00000000e+00]]
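As a sanity check on what this matrix encodes, here is a small NumPy sketch (the matrix values are taken from above; the test point is arbitrary) that applies the homography the way warpPerspective does conceptually: multiply by the 3x3 matrix, then divide by the third coordinate:

```python
import numpy as np

# Homography from above: maps logo-image pixel coordinates
# into final-image pixel coordinates.
H = np.array([[2.59952324e+00, 3.33170976e-01, -2.17014066e+02],
              [8.64133587e-01, 1.82580111e+00, -3.20053715e+02],
              [2.78910149e-03, 4.47911310e-05,  1.00000000e+00]])

def apply_homography(m, x, y):
    """Multiply by the 3x3 matrix, then do the perspective divide."""
    px, py, pz = m @ np.array([x, y, 1.0])
    return px / pz, py / pz

# The logo's top-left corner (0, 0) lands at the translation part of H,
# i.e. roughly (-217.0, -320.1) in final-image coordinates.
corner = apply_homography(H, 0.0, 0.0)
```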

  • Main image (the running track image here)

    Overlay image (the Coca Cola image here)

    Is it possible? I've read a lot and started basic OpenGL tutorials, but can it be done from just what I have? Would the OpenGL implementation be faster, say, around 10 ms?

    I'm currently playing with this tutorial here: http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html Am I going in the right direction? Total OpenGL newbie here, please bear with me. Thanks.

    Answer

    After trying a number of solutions proposed here and elsewhere, I ended up solving this by writing a fragment shader that replicates what 'warpPerspective' does.

    The fragment shader code looks something like:

    varying highp vec2 textureCoordinate;
    
    uniform sampler2D inputImageTexture;
    
    // NOTE: you will need to pass the INVERSE of the homography matrix, as well as
    // the width and height of your image as uniforms! Also note that GLSL indexes
    // mat3 as [column][row], so upload the inverse matrix in row-major (transposed)
    // order so that the [row][component] indexing below reads matrix rows.
    uniform highp mat3 inverseHomographyMatrix;
    uniform highp float width;
    uniform highp float height;
    
    void main()
    {
       // Texture coordinates will run [0,1],[0,1];
       // Convert to "real world" coordinates
       highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
    
       // Determine what 'z' is
       highp vec3 m = inverseHomographyMatrix[2] * frameCoordinate;
       highp float zed = 1.0 / (m.x + m.y + m.z);
       frameCoordinate = frameCoordinate * zed;
    
       // Determine translated x and y coordinates
       highp float xTrans = inverseHomographyMatrix[0][0] * frameCoordinate.x + inverseHomographyMatrix[0][1] * frameCoordinate.y + inverseHomographyMatrix[0][2] * frameCoordinate.z;
       highp float yTrans = inverseHomographyMatrix[1][0] * frameCoordinate.x + inverseHomographyMatrix[1][1] * frameCoordinate.y + inverseHomographyMatrix[1][2] * frameCoordinate.z;
    
       // Normalize back to [0,1],[0,1] space
       highp vec2 coords = vec2(xTrans / width, yTrans / height);
    
       // Sample the texture if we're mapping within the image, otherwise set color to black
       if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
           gl_FragColor = texture2D(inputImageTexture, coords);
       } else {
           gl_FragColor = vec4(0.0,0.0,0.0,0.0);
       }
    }
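The shader's per-pixel math can be checked against a CPU reference. The NumPy sketch below (a hypothetical helper, nearest-neighbour sampling only) performs the same inverse mapping for every destination pixel: transform the pixel through the inverse homography, do the perspective divide, and sample the source image if the result lands inside it.

```python
import numpy as np

def warp_perspective_cpu(img, h_inv):
    """CPU reference for the fragment shader: for each destination pixel,
    run it through the inverse homography and sample the source image."""
    height, width = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(height):
        for x in range(width):
            sx, sy, sz = h_inv @ np.array([x, y, 1.0])
            sx, sy = sx / sz, sy / sz            # perspective divide
            xi, yi = int(round(sx)), int(round(sy))
            if 0 <= xi < width and 0 <= yi < height:
                out[y, x] = img[yi, xi]          # inside: sample source
            # outside: stays zero, like the shader's black else-branch
    return out
```

Passing the identity matrix as `h_inv` should return the input image unchanged, which is a quick way to validate the indexing before moving to the GPU.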
    

    Note that the homography matrix we are passing in here is the INVERSE HOMOGRAPHY MATRIX! You have to invert the homography matrix that you would pass into 'warpPerspective'; otherwise this code will not work.
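Computing that inverse is a one-liner with NumPy; the sketch below (using the example matrix from the question and an arbitrary test point) also round-trips a point to confirm the inverse really undoes the forward mapping:

```python
import numpy as np

# Example homography from the question.
H = np.array([[2.59952324e+00, 3.33170976e-01, -2.17014066e+02],
              [8.64133587e-01, 1.82580111e+00, -3.20053715e+02],
              [2.78910149e-03, 4.47911310e-05,  1.00000000e+00]])

H_inv = np.linalg.inv(H)

def apply_homography(m, x, y):
    """Matrix multiply followed by the perspective divide."""
    px, py, pz = m @ np.array([x, y, 1.0])
    return px / pz, py / pz

# Forward through H, back through H_inv: should recover the point.
fx, fy = apply_homography(H, 100.0, 50.0)
bx, by = apply_homography(H_inv, fx, fy)
```

When handing the result to the shader, keep the layout caveat in mind: GLSL treats mat3 as column-major, so a row-major flattening of `H_inv` (uploaded without transposition) is what makes the shader's row-style indexing read actual matrix rows.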

    The vertex shader does nothing but pass through the coordinates:

    // Vertex shader
    attribute vec4 position;
    attribute vec4 inputTextureCoordinate;
    
    varying vec2 textureCoordinate;
    
    void main() {
       // Nothing happens in the vertex shader
       textureCoordinate = inputTextureCoordinate.xy;
       gl_Position = position;
    }
    

    Pass in unaltered texture coordinates and position coordinates (i.e. textureCoordinates = [(0,0),(0,1),(1,0),(1,1)] and positionCoordinates = [(-1,-1),(-1,1),(1,-1),(1,1)], for a triangle strip), and this should work!

