What is the best method to render video frames?


Question

What is the best choice for rendering video frames obtained from a decoder bundled into my app (FFmpeg, etc.)?

I would naturally tend to choose OpenGL, as mentioned in Android Video Player Using NDK, OpenGL ES, and FFmpeg.

But in OpenGL in Android for video display, a comment notes that OpenGL isn't the best method for rendering video.

What then? The jnigraphics native library? And a non-GL SurfaceView?

Please note that I would like to use a native API for rendering the frames, such as OpenGL or jnigraphics, but Java code for setting up a SurfaceView and the like is fine.

PS: MediaPlayer is irrelevant here; I'm talking about decoding and displaying the frames myself. I can't rely on the default Android codecs.

Answer

I'm going to attempt to elaborate on and consolidate the answers here based on my own experience.

Why OpenGL

When people think of rendering video with OpenGL, most are attempting to exploit the GPU to do color space conversion and alpha blending.

For instance, converting YV12 video frames to RGB. Color space conversions like YV12 -> RGB require you to calculate the value of each pixel individually. Imagine how many operations this ends up being for a 1280 x 720 frame: that is 921,600 pixels per frame, so at 30 fps the per-pixel conversion runs roughly 27.6 million times per second.
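As a rough illustration of that per-pixel cost, here is a naive, unoptimized scalar YV12 -> RGBA conversion in C. The integer coefficients are the commonly published BT.601 ones; the function and variable names are hypothetical:

    #include <stdint.h>

    /* Naive scalar YV12 -> RGBA8888 conversion (integer BT.601), purely to
     * illustrate the per-pixel cost described above. Not optimized. */
    static uint8_t clamp8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

    void yv12_to_rgba(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                      uint8_t *rgba, int width, int height)
    {
        for (int row = 0; row < height; ++row) {
            for (int col = 0; col < width; ++col) {
                /* In YV12 the chroma planes are subsampled 2x2. */
                int Y = (int)y[row * width + col] - 16;
                int U = (int)u[(row / 2) * (width / 2) + col / 2] - 128;
                int V = (int)v[(row / 2) * (width / 2) + col / 2] - 128;

                uint8_t *px = rgba + 4 * (row * width + col);
                px[0] = clamp8((298 * Y + 409 * V + 128) >> 8);           /* R */
                px[1] = clamp8((298 * Y - 100 * U - 208 * V + 128) >> 8); /* G */
                px[2] = clamp8((298 * Y + 516 * U + 128) >> 8);           /* B */
                px[3] = 255;                                              /* A */
            }
        }
    }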

What I've just described is really what SIMD was made for - performing the same operation on multiple pieces of data in parallel. The GPU is a natural fit for color space conversion.
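On the GPU, this usually means sampling the Y, U and V planes as separate textures and doing the conversion in a fragment shader. A minimal GLES2-style shader might look something like the sketch below (embedded as a C string; the uniform and varying names are invented for illustration, and the coefficients are again BT.601):

    /* GLES2 fragment shader that converts YUV to RGB per fragment on the
     * GPU. Names and exact constants are illustrative, not canonical. */
    static const char *kYuvFragmentShader =
        "precision mediump float;\n"
        "varying vec2 vTexCoord;\n"
        "uniform sampler2D uTexY;\n"
        "uniform sampler2D uTexU;\n"
        "uniform sampler2D uTexV;\n"
        "void main() {\n"
        "  float y = texture2D(uTexY, vTexCoord).r - 0.0625;\n"
        "  float u = texture2D(uTexU, vTexCoord).r - 0.5;\n"
        "  float v = texture2D(uTexV, vTexCoord).r - 0.5;\n"
        "  gl_FragColor = vec4(1.164 * y + 1.596 * v,\n"
        "                      1.164 * y - 0.391 * u - 0.813 * v,\n"
        "                      1.164 * y + 2.018 * u,\n"
        "                      1.0);\n"
        "}\n";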

Why !OpenGL

The downside is the process by which you get texture data into the GPU. Consider that for each frame you have to load the texture data into memory (a CPU operation) and then copy that texture data into the GPU (another CPU operation). It is this load/copy that can make OpenGL slower than the alternatives.
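In GLES2 terms, that per-frame copy is typically a glTexSubImage2D call, something like this sketch (names are illustrative):

    #include <GLES2/gl2.h>

    /* Sketch of the per-frame load/copy: decoded pixels are copied from
     * CPU memory into the GPU texture on every frame. `frame_pixels`
     * stands in for the decoder's output buffer. */
    void upload_frame(GLuint tex, const void *frame_pixels, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* This call pushes w*h*4 bytes through the driver each frame,
         * which is the cost that can dominate at HD resolutions. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, frame_pixels);
    }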

If you are playing low-resolution videos, you may well not see a speed difference because your CPU won't bottleneck. However, if you try HD you will more than likely hit this bottleneck and notice a significant performance hit.

The way this bottleneck has traditionally been worked around is by using Pixel Buffer Objects (allocating GPU memory to store texture loads). Unfortunately, GLES2 does not have Pixel Buffer Objects.
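For reference, on APIs that do support them (desktop OpenGL, and later OpenGL ES 3.0), the PBO pattern looks roughly like the following sketch; the point is that the texture update is staged through a GPU-owned buffer instead of reading from a client pointer:

    #include <stddef.h>
    #include <GLES3/gl3.h>

    /* The PBO pattern that GLES2 lacks: stage the frame in a GPU-owned
     * buffer so the driver can transfer it without a synchronous copy
     * from application memory. */
    void upload_frame_pbo(GLuint pbo, GLuint tex,
                          const void *frame_pixels, int w, int h)
    {
        size_t size = (size_t)w * h * 4;

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, size, frame_pixels, GL_STREAM_DRAW);

        glBindTexture(GL_TEXTURE_2D, tex);
        /* With a PBO bound, the last argument is an offset into the buffer,
         * not a pointer; the driver may perform the transfer asynchronously. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }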

Other options

For the above reasons, many have chosen to use software decoding combined with available CPU extensions like NEON for color space conversion. An implementation of YUV to RGB for NEON exists here. The means by which you draw the frames (SDL vs. OpenGL) should not matter for RGB, since you are copying the same number of pixels in both cases.
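For completeness, a non-GL way to put those RGB pixels on screen from native code is ANativeWindow (from libandroid), which locks the SurfaceView's buffer and lets you memcpy into it. A hedged sketch, assuming the Java side hands down an android.view.Surface:

    #include <jni.h>
    #include <stdint.h>
    #include <string.h>
    #include <android/native_window.h>
    #include <android/native_window_jni.h>

    /* Non-GL blit of an RGBA frame into a SurfaceView's buffer.
     * `surface` is the Surface object passed down from Java. */
    void draw_rgba_frame(JNIEnv *env, jobject surface,
                         const uint8_t *rgba, int w, int h)
    {
        ANativeWindow *win = ANativeWindow_fromSurface(env, surface);
        if (!win) return;
        ANativeWindow_setBuffersGeometry(win, w, h, WINDOW_FORMAT_RGBA_8888);

        ANativeWindow_Buffer buf;
        if (ANativeWindow_lock(win, &buf, NULL) == 0) {
            /* Copy row by row: buf.stride is in pixels and may exceed w. */
            for (int row = 0; row < h; ++row)
                memcpy((uint8_t *)buf.bits + (size_t)row * buf.stride * 4,
                       rgba + (size_t)row * w * 4, (size_t)w * 4);
            ANativeWindow_unlockAndPost(win);
        }
        ANativeWindow_release(win);
    }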

You can determine whether your target device supports NEON enhancements by running cat /proc/cpuinfo from an adb shell and looking for neon in the Features output.
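If you would rather check at runtime from native code, the NDK's cpufeatures helper library exposes the same information; a minimal sketch:

    #include <cpu-features.h>  /* from the NDK's "cpufeatures" module */

    /* Runtime equivalent of grepping /proc/cpuinfo for "neon". On 64-bit
     * ARM (arm64-v8a) NEON support is mandatory, so this check matters
     * mainly for 32-bit ARM devices. */
    int device_has_neon(void)
    {
        return android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM &&
               (android_getCpuFeatures() & ANDROID_CPU_ARM_FEATURE_NEON) != 0;
    }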
