Decode video frames on iPhone GPU


Question

I'm looking for the fastest way to decode a local mpeg-4 video's frames on the iPhone. I'm simply interested in the luminance values of the pixels in every 10th frame. I don't need to render the video anywhere.

I've tried ffmpeg, AVAssetReader, AVAssetImageGenerator, OpenCV, and MPMoviePlayer, but they're all too slow. The fastest speed I can get is ~2x (2 minutes of video scanned in a minute). I'd like something closer to 10x.

Assuming my attempts above didn't utilize the GPU, is there any way to accomplish my goal with something that does run on the GPU? OpenGL seems like it's mostly for rendering output, but I have seen it used to filter incoming video. Maybe that's an option?

Thanks in advance!

Answer

If you are willing to use an iOS 5 only solution, take a look at the sample app ChromaKey from the 2011 WWDC session on AVCaptureSession.

That demo captures 30 FPS of video from the built-in camera and passes each frame to OpenGL as a texture. It then uses OpenGL to manipulate the frame, and optionally writes the result out to an output video file.

The code uses some serious low-level magic to bind a Core Video pixel buffer from an AVCaptureSession to an OpenGL texture, so that the two share memory in the graphics hardware.
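For reference, the mechanism behind that "magic" is the Core Video OpenGL ES texture cache. A minimal Swift sketch of the binding (real CVOpenGLESTextureCache API, error handling omitted; helper names like `makeTextureCache` are just for illustration):

```swift
import CoreVideo
import OpenGLES

// The texture cache lets a CVPixelBuffer back an OpenGL ES texture
// directly, with no glTexImage2D upload or CPU-side copy.
var textureCache: CVOpenGLESTextureCache?

func makeTextureCache(context: EAGLContext) {
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &textureCache)
}

// Wraps the luma (Y) plane of a biplanar Y/UV pixel buffer as a
// GL_LUMINANCE texture, zero-copy. Called once per decoded frame.
func lumaTexture(from pixelBuffer: CVPixelBuffer) -> CVOpenGLESTexture? {
    var texture: CVOpenGLESTexture?
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache!, pixelBuffer, nil,
        GLenum(GL_TEXTURE_2D), GL_LUMINANCE,
        GLsizei(width), GLsizei(height),
        GLenum(GL_LUMINANCE), GLenum(GL_UNSIGNED_BYTE),
        0,                          // plane index 0 is the Y plane
        &texture)
    return texture
}
```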

It should be fairly straightforward to change the AVCaptureSession to use a movie file as input rather than camera input.
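One caveat worth knowing: AVCaptureSession itself has no movie-file input on iOS, so in practice the file-based counterpart is AVAssetReader, which uses the same hardware decoder and hands back the same kind of CVPixelBuffer. A sketch in Swift, assuming a local file at `videoURL` (a placeholder name):

```swift
import AVFoundation
import CoreMedia

// Sketch: pull hardware-decoded frames from a local movie file,
// requesting biplanar Y/UV output so the luma plane is available directly.
func scanLuma(of videoURL: URL) throws {
    let asset = AVAsset(url: videoURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [
            kCVPixelBufferPixelFormatTypeKey as String:
                kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        ])
    output.alwaysCopiesSampleData = false   // skip an extra buffer copy
    reader.add(output)
    reader.startReading()

    var frameIndex = 0
    while let sample = output.copyNextSampleBuffer() {
        defer { frameIndex += 1 }
        // Decoding still touches every frame; only the scan is skipped.
        guard frameIndex % 10 == 0,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sample) else { continue }

        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        // Plane 0 of a biplanar Y/UV buffer is per-pixel luminance.
        let luma = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
        _ = luma    // ... scan here, or hand the buffer to the texture cache ...
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
    }
}
```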

You could probably set up the session to deliver frames in Y/UV form rather than RGB, where the Y component is luminance. Failing that, it would be a pretty simple matter to write a shader that would convert RGB values for each pixel to luminance values.
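If only RGB frames are available, that shader is essentially one line: a dot product with the standard Rec. 601 luma weights (the same weighting the Y plane of video Y/UV formats uses). A sketch, as an OpenGL ES 2.0 fragment shader embedded in a Swift string; the `videoTexture` and `texCoord` names are illustrative:

```swift
// Fragment shader converting each RGB pixel to its luminance.
let luminanceFragmentShader = """
precision mediump float;
varying vec2 texCoord;          // from the vertex shader
uniform sampler2D videoTexture; // the decoded frame

void main() {
    vec3 rgb = texture2D(videoTexture, texCoord).rgb;
    float luma = dot(rgb, vec3(0.299, 0.587, 0.114)); // Rec. 601 weights
    gl_FragColor = vec4(vec3(luma), 1.0);
}
"""
```

Rendering with this shader into an offscreen framebuffer (and reading the result back, or reducing it further on the GPU) yields per-pixel luminance without ever putting the video on screen.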

You should be able to do all of this on every frame, not just every 10th frame.

