ffmpeg video to opengl texture

Problem Description

I'm trying to render frames grabbed and converted from a video using ffmpeg to an OpenGL texture to be put on a quad. I've pretty much exhausted Google and not found an answer; well, I've found answers, but none of them seem to have worked.

Basically, I am using avcodec_decode_video2() to decode the frame, then sws_scale() to convert the frame to RGB, and then glTexSubImage2D() to create an OpenGL texture from it, but I can't seem to get anything to work.

I've made sure the "destination" AVFrame has power of 2 dimensions in the SWS Context setup. Here is my code:

SwsContext *img_convert_ctx = sws_getContext(pCodecCtx->width,
                pCodecCtx->height, pCodecCtx->pix_fmt, 512,
                256, PIX_FMT_RGB24, SWS_BICUBIC, NULL,
                NULL, NULL);

//While still frames to read
while(av_read_frame(pFormatCtx, &packet)>=0) {
    glClear(GL_COLOR_BUFFER_BIT);

    //If the packet is from the video stream
    if(packet.stream_index == videoStream) {
        //Decode the video
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

        //If we got a frame then convert it and put it into RGB buffer
        if(frameFinished) {
            printf("frame finished: %i\n", number);
            sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

            glBindTexture(GL_TEXTURE_2D, texture);
            //gluBuild2DMipmaps(GL_TEXTURE_2D, 3, pCodecCtx->width, pCodecCtx->height, GL_RGB, GL_UNSIGNED_INT, pFrameRGB->data);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, 512, 256, GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
            SaveFrame(pFrameRGB, pCodecCtx->width, pCodecCtx->height, number);
            number++;
        }
    }

    glColor3f(1,1,1);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
        glTexCoord2f(0,1);
        glVertex3f(0,0,0);

        glTexCoord2f(1,1);
        glVertex3f(pCodecCtx->width,0,0);

        glTexCoord2f(1,0);
        glVertex3f(pCodecCtx->width, pCodecCtx->height,0);

        glTexCoord2f(0,0);
        glVertex3f(0,pCodecCtx->height,0);

    glEnd();

    //Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}

As you can see in that code, I am also saving the frames to .ppm files just to make sure they are actually rendering, which they are.

The file being used is a .wmv at 854x480; could this be the problem? The fact that I'm just telling it to go to 512x256?

P.S. I've looked at this Stack Overflow question but it didn't help.

Also, I have glEnable(GL_TEXTURE_2D) set, and have tested texturing by just loading in a normal BMP.

EDIT

I'm getting an image on the screen now, but it is a garbled mess. I'm guessing it's something to do with changing things to a power of 2 (in the decode, SwsContext and gluBuild2DMipmaps as shown in my code). I'm using nearly exactly the same code as shown above, only I've changed glTexSubImage2D to gluBuild2DMipmaps and changed the type to GL_RGBA.

This is what the frame looks like: (screenshot not preserved)
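One plausible cause, sketched below (this is an inference, not code from the original post): the upload must describe the buffer that sws_scale actually wrote. The SwsContext above scales into a 512x256 RGB24 buffer, so the upload has to use those dimensions and GL_RGB; passing the codec's 854x480 or GL_RGBA makes OpenGL read every row with the wrong stride.

//Inferred fix sketch: keep the upload consistent with the SwsContext
//destination (512x256, PIX_FMT_RGB24 -> GL_RGB), not the codec size.
sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize,
          0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, 512, 256,
                  GL_RGB, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);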

EDIT

Just realised I haven't shown the code for how pFrameRGB is set up:

//Allocate video frame for 24bit RGB that we convert to.
AVFrame *pFrameRGB;
pFrameRGB = avcodec_alloc_frame();

if(pFrameRGB == NULL) {
    return -1;
}

//Allocate memory for the raw data we get when converting.
uint8_t *buffer;
int numBytes;
numBytes = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);
buffer = (uint8_t *) av_malloc(numBytes*sizeof(uint8_t));

//Associate frame with our buffer
avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB24,
    pCodecCtx->width, pCodecCtx->height);

Now that I have changed the PixelFormat in avpicture_get_size to PIX_FMT_RGB24, and done the same in the SwsContext and changed gluBuild2DMipmaps to use GL_RGB, I get a slightly better image, but it looks like I'm still missing lines and it's still a bit stretched:
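A hedged guess at the missing lines (again an inference, not from the original thread): by default OpenGL assumes rows are tightly packed and 4-byte aligned, so if the RGB buffer's stride doesn't match that, telling OpenGL the real layout before uploading may fix it:

//Describe the buffer's actual row layout to OpenGL before the upload.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                            //don't assume 4-byte row alignment
glPixelStorei(GL_UNPACK_ROW_LENGTH, pFrameRGB->linesize[0] / 3);  //stride in pixels (RGB24 = 3 bytes/pixel)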

ANOTHER EDIT

After following Macke's advice and passing the actual resolution to OpenGL, I get the frames nearly right, but they're still a bit skewed and in black and white; also, it's only getting 6 fps now rather than 110 fps:
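As an aside (my own suggestion, not something from the thread): if the fps drop comes from scaling the full 854x480 frame with bicubic filtering, a cheaper scaler flag may claw some of it back at a small quality cost:

//Same setup as before, but with a faster (lower-quality) scaler.
SwsContext *img_convert_ctx = sws_getContext(pCodecCtx->width,
                pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width,
                pCodecCtx->height, PIX_FMT_RGB24, SWS_FAST_BILINEAR, NULL,
                NULL, NULL);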

PS

I've got a function that saves the frames to image files after sws_scale() and they come out fine, in colour and everything, so something in OGL is making it B&W.

FINAL EDIT

Working! Okay, I have it working now. Basically, I am not padding the texture out to a power of 2, and am just using the video's own resolution.

I got the texture showing up properly with a lucky guess at the correct glPixelStorei() setting:

glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
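Why 2 happens to work here (my inference, not from the original post): an 854-pixel RGB24 row is 854 * 3 = 2562 bytes, which is divisible by 2 but not by OpenGL's default unpack alignment of 4, so with the default setting each row was read with a small skew. A generic way to pick the alignment:

//Pick the largest legal unpack alignment (8, 4, 2 or 1) that divides
//the row size; for 854-wide RGB24 this yields 2.
int row_bytes = pCodecCtx->width * 3;
int align = (row_bytes % 8 == 0) ? 8
          : (row_bytes % 4 == 0) ? 4
          : (row_bytes % 2 == 0) ? 2 : 1;
glPixelStorei(GL_UNPACK_ALIGNMENT, align);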

Also, if anyone else has the subimage() showing-blank problem like me, you have to fill the texture at least once with glTexImage2D(), so I use it once in the loop and then use glTexSubImage2D() after that.
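Put together, a minimal sketch of the pattern just described, using the names from the question (the glTexImage2D call only needs to run once; I've written it as if it lived before the decode loop):

//One-time setup: allocate the texture's storage with a NULL data
//pointer so the later glTexSubImage2D calls have somewhere to write.
glBindTexture(GL_TEXTURE_2D, texture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pCodecCtx->width,
             pCodecCtx->height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

//Per decoded frame, inside the loop: stream the converted pixels in.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pCodecCtx->width,
                pCodecCtx->height, GL_RGB, GL_UNSIGNED_BYTE,
                pFrameRGB->data[0]);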

Thanks Macke and datenwolf for all your help.

Answer

Is the texture initialized when you call glTexSubImage2D? You need to call glTexImage2D (not Sub) one time to initialize the texture object. Use NULL for the data pointer; OpenGL will then initialize the texture without copying data.

EDIT

You're not supplying mipmap levels. So did you disable mipmapping?

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, linear_interpolation ? GL_LINEAR : GL_NEAREST);

编辑2 正在颠倒的图像不会像大多数图像格式原点在左上角,而OpenGL将纹理图像的原点放在左下方。你看到的那个条带看起来像排错了。

EDIT 2

The image being upside down is no surprise, as most image formats have the origin in the upper left, while OpenGL places the texture image's origin in the lower left. The banding you see there looks like a wrong row stride.
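Flipping the quad's texture coordinates (as the question's code already does) handles the origin difference; another well-known trick, sketched here as an illustration rather than taken from this thread, is to have sws_scale write the frame bottom-up by pointing the destination at its last row with a negative stride:

/* Flip vertically during conversion: start at the last row and step
   backwards. dst_h is whatever height the SwsContext destination has. */
int dst_h = 256;
uint8_t *dst_data[4] = { pFrameRGB->data[0]
    + (dst_h - 1) * pFrameRGB->linesize[0], NULL, NULL, NULL };
int dst_linesize[4] = { -pFrameRGB->linesize[0], 0, 0, 0 };

sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize,
          0, pCodecCtx->height, dst_data, dst_linesize);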

编辑3

I did this kind of stuff myself about a year ago. I wrote myself a small wrapper for ffmpeg, which I called aveasy: https://github.com/datenwolf/aveasy

And this is some code to put the data fetched using aveasy into OpenGL textures:

#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

#include <GL/glew.h>

#include "camera.h"
#include "aveasy.h"

#define CAM_DESIRED_WIDTH 640
#define CAM_DESIRED_HEIGHT 480

AVEasyInputContext *camera_av;
char const *camera_path = "/dev/video0";
GLuint camera_texture;

int open_camera(void)
{
    glGenTextures(1, &camera_texture);

    AVEasyInputContext *ctx;

    ctx = aveasy_input_open_v4l2(
        camera_path,
        CAM_DESIRED_WIDTH,
        CAM_DESIRED_HEIGHT,
        CODEC_ID_MJPEG,
        PIX_FMT_BGR24 );
    camera_av = ctx;

    if(!ctx) {
        return 0;
    }

    /* OpenGL-2 or later is assumed; OpenGL-2 supports NPOT textures. */
    glBindTexture(GL_TEXTURE_2D, camera_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(
        GL_TEXTURE_2D,  
        0,
        GL_RGB, 
        aveasy_input_width(ctx),
        aveasy_input_height(ctx),
        0,
        GL_BGR,
        GL_UNSIGNED_BYTE,
        NULL );

    return 1;
}

void update_camera(void)
{
    glPixelStorei( GL_UNPACK_SWAP_BYTES, GL_FALSE );
    glPixelStorei( GL_UNPACK_LSB_FIRST,  GL_TRUE  );
    glPixelStorei( GL_UNPACK_ROW_LENGTH, 0 );
    glPixelStorei( GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei( GL_UNPACK_SKIP_ROWS, 0);
    glPixelStorei( GL_UNPACK_ALIGNMENT, 1);

    AVEasyInputContext *ctx = camera_av;
    void *buffer;

    if(!ctx)
        return;

    if( !( buffer = aveasy_input_read_frame(ctx) ) )
        return;

    glBindTexture(GL_TEXTURE_2D, camera_texture);
    glTexSubImage2D(
        GL_TEXTURE_2D,
        0,
        0,
        0,
        aveasy_input_width(ctx),
        aveasy_input_height(ctx),
        GL_BGR,
        GL_UNSIGNED_BYTE,
        buffer );
}


void close_cameras(void)
{
    aveasy_input_close(camera_av);
    camera_av=0;
}

I'm using this in a project and it works there, so this code is tested, sort of.
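For completeness, a hypothetical sketch (mine, not from the original answer) of how the three functions above might be driven; a real program would have a proper window, GL context and event loop where the comments sit:

int main(void)
{
    /* ... create a window and OpenGL context here ... */

    if(!open_camera())
        return 1;

    int running = 1;
    while(running) {
        update_camera();   /* upload the newest frame, if any */
        /* ... draw a quad textured with camera_texture, swap buffers,
           and clear `running` when the user quits ... */
    }

    close_cameras();
    return 0;
}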
