iOS4: how do I use a video file as an OpenGL texture?


Problem Description

I'm trying to display the contents of a video file (let's just say without the audio for now) onto a UV-mapped 3D object in OpenGL. I've done a fair bit in OpenGL, but have no idea where to begin with video file handling, and most of the examples out there seem to be about getting video frames from cameras, which is not what I'm after.

At the moment I feel that if I can get individual frames of the video as CGImageRefs I'd be set, so I'm wondering how to do that. Perhaps there are even better ways to do this? Where should I start, and what's the most straightforward file format for video playback on iOS? .mov?

Recommended Answer

Apologies; typing on an iPhone, so I'll be a little brief.

Create an AVURLAsset with the URL of your video, which can be a local file URL if you like. Anything QuickTime can do is fine, so MOV or M4V in H.264 is probably the best source.
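A minimal sketch of this step, assuming a file named video.m4v bundled with the app (the filename is just an illustration):

```objc
#import <AVFoundation/AVFoundation.h>

// Build an AVURLAsset from a local file URL; a remote URL would also work.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"video" withExtension:@"m4v"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
```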

Query the asset for tracks of type AVMediaTypeVideo. You should get just one unless your source video has multiple camera angles or something like that, so just taking objectAtIndex:0 should give you the AVAssetTrack you want.
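Continuing the sketch from above, fetching the first (and usually only) video track:

```objc
// Grab the video tracks; for a typical movie there is exactly one.
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
```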

Use that to create an AVAssetReaderTrackOutput. Probably you want to specify kCVPixelFormatType_32BGRA.
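The pixel format is requested through the output-settings dictionary, for example:

```objc
// Ask for BGRA frames so the pixels can go straight to GL via Apple's BGRA extension.
NSDictionary *settings = [NSDictionary
    dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                  forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:settings];
```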

Create an AVAssetReader using the asset and attach the asset reader track output as an output. Then call startReading.
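Wiring the reader together might look like this, with error handling kept minimal for brevity:

```objc
NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
[reader addOutput:trackOutput];

if (![reader startReading]) {
    NSLog(@"Failed to start reading: %@", [reader error]);
}
```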

Henceforth you can call copyNextSampleBuffer on the track output to get new CMSampleBuffers, putting you in the same position as if you were taking input from the camera. So you can lock each one to get at the pixel contents and push those to OpenGL via Apple's BGRA extension.
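A sketch of the per-frame read-and-upload step; `texture` is assumed to be an already-generated GL texture name, and note that copyNextSampleBuffer follows the Core Foundation "copy" rule, so each buffer must be released:

```objc
#import <CoreMedia/CoreMedia.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>   // for GL_BGRA_EXT (GL_APPLE_texture_format_BGRA8888)

CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
if (sampleBuffer) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    GLsizei width  = (GLsizei)CVPixelBufferGetWidth(pixelBuffer);
    GLsizei height = (GLsizei)CVPixelBufferGetHeight(pixelBuffer);

    // Upload straight from the locked buffer. Caveat: if CVPixelBufferGetBytesPerRow
    // reports padding beyond width * 4, the rows need repacking first, since
    // OpenGL ES has no GL_UNPACK_ROW_LENGTH.
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    CFRelease(sampleBuffer);  // copyNextSampleBuffer returns a +1 reference
}
```

Calling this once per display frame (or driving it from a CADisplayLink) plays the movie onto the texture; how you map that texture onto the UV-mapped object is the ordinary OpenGL part you already know.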
