What is the best method to render video frames?

Question

What is the best choice for rendering video frames obtained from a decoder bundled into my app (FFmpeg, etc.)?

I would naturally tend to choose OpenGL as mentioned in Android Video Player Using NDK, OpenGL ES, and FFmpeg (http://stackoverflow.com/questions/4676178/android-video-player-using-ndk-opengl-es-and-ffmpeg).

But in OpenGL in Android for video display, a comment notes that OpenGL isn't the best method for rendering video.

What then? The jnigraphics native library? And a non-GL SurfaceView?

Please note that I would like to use a native API for rendering the frames, such as OpenGL or jnigraphics. But Java code for setting up a SurfaceView and such is ok.

PS: MediaPlayer is irrelevant here, I'm talking about decoding and displaying the frames by myself. I can't rely on the default Android codecs.

Answer

I'm going to attempt to elaborate on and consolidate the answers here based on my own experiences.

Why openGL

When people think of rendering video with openGL, most are attempting to exploit the GPU to do color space conversion and alpha blending.

For instance, converting YV12 video frames to RGB. Color space conversions like YV12 -> RGB require that you calculate the value of each pixel individually. Imagine how many operations that ends up being for a single 1280 x 720 frame.
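
To make that cost concrete, here is a minimal, unoptimized sketch in C of a YV12 -> RGBX conversion. The function name and the BT.601 integer coefficients are illustrative, not from the original answer; the point is that every one of the 921,600 pixels in a 1280 x 720 frame passes through this multiply/add/clamp chain:

    #include <stdint.h>

    static uint8_t clamp8(int v) {
        return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
    }

    /* Naive YV12 -> RGBX conversion. YV12 stores a full-resolution Y plane
     * followed by quarter-resolution V and U planes (V first, which is what
     * distinguishes it from I420). */
    void yv12_to_rgbx(const uint8_t *src, uint8_t *dst, int width, int height)
    {
        const uint8_t *y_plane = src;
        const uint8_t *v_plane = src + width * height;
        const uint8_t *u_plane = v_plane + (width / 2) * (height / 2);

        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int c = y_plane[j * width + i] - 16;
                int d = u_plane[(j / 2) * (width / 2) + (i / 2)] - 128;
                int e = v_plane[(j / 2) * (width / 2) + (i / 2)] - 128;

                uint8_t *px = dst + (j * width + i) * 4;
                px[0] = clamp8((298 * c + 409 * e + 128) >> 8);           /* R */
                px[1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8); /* G */
                px[2] = clamp8((298 * c + 516 * d + 128) >> 8);           /* B */
                px[3] = 255;                                              /* X */
            }
        }
    }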

What I've just described is really what SIMD was made for - performing the same operation on multiple pieces of data in parallel. The GPU is a natural fit for color space conversion.
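
As an illustration of how that maps onto the GPU, here is a sketch of a GLES2 fragment shader that performs the same conversion per fragment, assuming the Y, U and V planes have been uploaded as three separate GL_LUMINANCE textures. The uniform names and the normalized coefficients are illustrative:

    /* GLES2 fragment shader (as a C string literal): every fragment is
     * converted in parallel on the GPU instead of pixel-by-pixel on the CPU. */
    static const char *yuv_to_rgb_fragment_src =
        "precision mediump float;                                   \n"
        "varying vec2 v_texcoord;                                   \n"
        "uniform sampler2D y_tex;                                   \n"
        "uniform sampler2D u_tex;                                   \n"
        "uniform sampler2D v_tex;                                   \n"
        "void main() {                                              \n"
        "    float y = texture2D(y_tex, v_texcoord).r - 0.0625;     \n"
        "    float u = texture2D(u_tex, v_texcoord).r - 0.5;        \n"
        "    float v = texture2D(v_tex, v_texcoord).r - 0.5;        \n"
        "    gl_FragColor = vec4(1.164 * y + 1.596 * v,             \n"
        "                        1.164 * y - 0.391 * u - 0.813 * v, \n"
        "                        1.164 * y + 2.018 * u,             \n"
        "                        1.0);                              \n"
        "}                                                          \n";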

Why !openGL

The downside is the process by which you get texture data into the GPU. Consider that for each frame you have to load the texture data into memory (a CPU operation) and then copy that texture data into the GPU (another CPU operation). It is this load/copy cycle that can make using openGL slower than the alternatives.
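
A sketch of what that per-frame load/copy looks like in code, assuming an already-allocated RGBA texture under GLES2 (the function and parameter names are illustrative):

    #include <stdint.h>
    #include <GLES2/gl2.h>

    /* Called once per decoded frame. glTexSubImage2D synchronously copies the
     * pixel data out of application memory - roughly width * height * 4 bytes
     * (about 3.5 MB at 1280 x 720), 30 or more times per second. */
    void upload_frame(GLuint texture, const uint8_t *rgba, int width, int height)
    {
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    }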

If you are playing low resolution videos then I suppose it's possible you won't see the speed difference because your CPU won't bottleneck. However, if you try with HD you will more than likely hit this bottleneck and notice a significant performance hit.

The way this bottleneck has traditionally been worked around is by using Pixel Buffer Objects (allocating GPU memory to store the texture loads). Unfortunately GLES2 does not have Pixel Buffer Objects.
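
For reference, the PBO streaming pattern looks roughly like the sketch below. Note that it needs GLES3 or desktop GL, which is exactly the problem on a GLES2-only device; the names are illustrative:

    #include <stdint.h>
    #include <string.h>
    #include <GLES3/gl3.h>  /* PBOs exist in GLES3/desktop GL, not GLES2 */

    /* Write the frame into GPU-owned memory so glTexSubImage2D can source it
     * without a second blocking CPU-side copy. */
    void upload_frame_pbo(GLuint pbo, GLuint texture,
                          const uint8_t *rgba, int width, int height)
    {
        GLsizeiptr size = (GLsizeiptr)width * height * 4;

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW); /* orphan */
        void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                     GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        if (dst) {
            memcpy(dst, rgba, (size_t)size);
            glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        }

        glBindTexture(GL_TEXTURE_2D, texture);
        /* With a PBO bound, the last argument is an offset into the buffer,
         * not a client pointer; the copy happens on the GPU's schedule. */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }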

Other options

For the above reasons, many have chosen to use software decoding combined with available CPU extensions like NEON for the color space conversion. An implementation of YUV 2 RGB for NEON exists here. The means by which you draw the frames, SDL vs openGL, should not matter for RGB since you are copying the same number of pixels in both cases.
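
If you take the non-GL route the question raises, a sketch of blitting the converted RGBX frames into a SurfaceView's Surface through ANativeWindow (API level 9+) might look like the following; the ANativeWindow would come from ANativeWindow_fromSurface() in your JNI glue, and the function name here is illustrative:

    #include <stdint.h>
    #include <string.h>
    #include <android/native_window.h>

    /* Push one decoded RGBX frame to the screen without touching openGL. */
    void draw_frame(ANativeWindow *window, const uint8_t *rgbx,
                    int width, int height)
    {
        ANativeWindow_setBuffersGeometry(window, width, height,
                                         WINDOW_FORMAT_RGBX_8888);

        ANativeWindow_Buffer buffer;
        if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
            /* buffer.stride is in pixels and may be wider than the frame,
             * so copy row by row instead of one big memcpy. */
            uint8_t *dst = (uint8_t *)buffer.bits;
            for (int row = 0; row < height; row++) {
                memcpy(dst + (size_t)row * buffer.stride * 4,
                       rgbx + (size_t)row * width * 4,
                       (size_t)width * 4);
            }
            ANativeWindow_unlockAndPost(window);
        }
    }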

You can determine if your target device supports NEON enhancements by running cat /proc/cpuinfo from adb shell and looking for NEON in the features output.
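
If you would rather test for NEON at runtime from native code, the NDK's cpufeatures helper library exposes the same information; a minimal sketch:

    #include <cpu-features.h>  /* link against the NDK's "cpufeatures" module */

    /* Runtime equivalent of grepping /proc/cpuinfo for NEON. */
    int has_neon(void)
    {
        return android_getCpuFamily() == ANDROID_CPU_FAMILY_ARM &&
               (android_getCpuFeatures() & ANDROID_CPU_ARM_FEATURE_NEON) != 0;
    }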
