Decoding video directly into a texture in a separate thread

Problem description

Is it possible to use ffmpeg's capabilities to decode video directly into a texture, asynchronously? I need to output the video onto a geometry.

There is the mpv video player, which can output video directly into a framebuffer and uses other close-to-the-metal features, but is there a minimalistic example suitable for embedded devices (OpenGL ES 2.0 or 3.0)?

It would be nice if the texture never left GPU memory for the whole frame time.

Recommended answer

I currently use sws_scale to trim the edges off mpegts stream frames, as some frames will have 16 or even 32 extra pixels at the edge used during decoding. This isn't necessary for most uses. Instead, I use it to copy directly into my own buffers.

// create the conversion context once per stream
ff->scale_context = sws_getContext(wid, hgt, ff->vid_ctx->pix_fmt,  // usually YUV420
                               wid, hgt, AV_PIX_FMT_YUV420P,        // trim edges and copy
                               SWS_FAST_BILINEAR, NULL, NULL, NULL);

// set up my buffer to copy the frame into: one plane each for Y, U and V

uint8_t *data[] = { vframe->yframe, vframe->uframe, vframe->vframe };
int linesize[4] = { vid_ctx->width, vid_ctx->width / 2, vid_ctx->width / 2, 0 };

int ret = sws_scale(ff->scale_context,
          (const uint8_t **)frame->data, frame->linesize,
          0, vid_ctx->height,
          data, linesize);

You will need to adjust this if the frames arrive in another pixel format.
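As a minimal sketch of that adjustment (the cached_pix_fmt field is a hypothetical addition to the ff state used above, not from the original), you can rebuild the conversion context whenever the decoder reports a different source format:

// Rebuild the sws context if the decoded frame's format differs from the one
// the current context was created for (cached_pix_fmt is a hypothetical field).
if (frame->format != ff->cached_pix_fmt) {
    sws_freeContext(ff->scale_context);
    ff->scale_context = sws_getContext(wid, hgt, (enum AVPixelFormat)frame->format,
                                       wid, hgt, AV_PIX_FMT_YUV420P,
                                       SWS_FAST_BILINEAR, NULL, NULL, NULL);
    ff->cached_pix_fmt = (enum AVPixelFormat)frame->format;
}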

The GPU shader used for OpenGL ES, which saves a lot of overhead:

// YUV shader (converts YUV planes to RGB on the fly)

static char vertexYUV[] = "attribute vec4 qt_Vertex; \
attribute vec2 qt_InUVCoords; \
varying vec2 qt_TexCoord0; \
 \
void main(void) \
{ \
    gl_Position = qt_Vertex; \
    gl_Position.z = 0.0;\
    qt_TexCoord0 = qt_InUVCoords; \
} \
";

static char fragmentYUV[] = "precision mediump float; \
uniform sampler2D qt_TextureY; \
uniform sampler2D qt_TextureU; \
uniform sampler2D qt_TextureV; \
varying vec2 qt_TexCoord0; \
void main(void) \
{ \
    float y = texture2D(qt_TextureY, qt_TexCoord0).r; \
    float u = texture2D(qt_TextureU, qt_TexCoord0).r - 0.5; \
    float v = texture2D(qt_TextureV, qt_TexCoord0).r - 0.5; \
    gl_FragColor = vec4( y + 1.403 * v, \
                         y - 0.344 * u - 0.714 * v, \
                         y + 1.770 * u, 1.0); \
}";

If using the NV12 format instead of YUV420, the UV values are interleaved in a single plane, and you just fetch both with either "r, g" or "x, y", whichever swizzle you use.
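As an illustration, a sketch of an NV12 variant of the fragment shader above (the qt_TextureUV sampler name is an assumption; the .rg fetch assumes the interleaved plane is uploaded as a two-channel GL_RG texture on ES 3.0, while on ES 2.0 a GL_LUMINANCE_ALPHA upload with an .ra swizzle works the same way):

// NV12 fragment shader sketch: Y in one texture, interleaved UV in a second
static char fragmentNV12[] = "precision mediump float; \
uniform sampler2D qt_TextureY; \
uniform sampler2D qt_TextureUV; \
varying vec2 qt_TexCoord0; \
void main(void) \
{ \
    float y = texture2D(qt_TextureY, qt_TexCoord0).r; \
    vec2 uv = texture2D(qt_TextureUV, qt_TexCoord0).rg - 0.5; \
    gl_FragColor = vec4( y + 1.403 * uv.y, \
                         y - 0.344 * uv.x - 0.714 * uv.y, \
                         y + 1.770 * uv.x, 1.0); \
}";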

Each YUV plane from your buffer is uploaded to qt_TextureY, U and V.
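For reference, a sketch of that upload for OpenGL ES 2.0 (the tex_y/tex_u/tex_v texture ids are assumptions; single-channel GL_LUMINANCE textures map onto the .r fetch in the shader, and the chroma planes are half-size for YUV420):

// upload the three planes to texture units 0, 1 and 2
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_y);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, wid, hgt, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, vframe->yframe);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex_u);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, wid / 2, hgt / 2, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, vframe->uframe);

glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, tex_v);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, wid / 2, hgt / 2, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, vframe->vframe);

The qt_TextureY, qt_TextureU and qt_TextureV sampler uniforms then point at units 0, 1 and 2 via glUniform1i.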

As mentioned in the comments, FFmpeg builds will automatically use hardware decoding where it is available.
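If you would rather request a hardware device explicitly than rely on the build defaults, a hedged sketch using FFmpeg's hwcontext API (the ff fields follow the snippets above; AV_HWDEVICE_TYPE_VAAPI is only an example, pick the type for your platform, e.g. AV_HWDEVICE_TYPE_DRM or AV_HWDEVICE_TYPE_MEDIACODEC on embedded targets):

#include <libavutil/hwcontext.h>

// attach a HW device context to the decoder before avcodec_open2()
AVBufferRef *hw_ctx = NULL;
if (av_hwdevice_ctx_create(&hw_ctx, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0) == 0)
    ff->vid_ctx->hw_device_ctx = av_buffer_ref(hw_ctx);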

Also, to shave CPU overhead, I separate all decoding streams into their own threads.
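A rough sketch of how such a thread can look with the send/receive packet API (the ffstream struct and the frame_queue helpers are assumptions, not from the original; the render thread pops frames off the queue and does the sws_scale copy and texture upload shown above):

#include <pthread.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

struct ffstream {                  // hypothetical per-stream state
    AVFormatContext *fmt_ctx;
    AVCodecContext *vid_ctx;
    int vid_stream_index;
    int running;
    struct frame_queue *queue;     // hypothetical thread-safe queue
};

static void *decode_thread(void *arg)
{
    struct ffstream *ff = arg;
    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();

    while (ff->running && av_read_frame(ff->fmt_ctx, pkt) >= 0) {
        if (pkt->stream_index == ff->vid_stream_index) {
            avcodec_send_packet(ff->vid_ctx, pkt);
            while (avcodec_receive_frame(ff->vid_ctx, frame) == 0)
                frame_queue_push(ff->queue, av_frame_clone(frame)); // queue owns the clone
        }
        av_packet_unref(pkt);
    }
    av_frame_free(&frame);
    av_packet_free(&pkt);
    return NULL;
}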

Good luck. Anything else, just ask.
