iOS4: how do I use a video file as an OpenGL texture?

Problem description

I'm trying to display the contents of a video file (let's just say without the audio for now) onto a UV mapped 3D object in OpenGL. I've done a fair bit in OpenGL but have no idea where to begin with video file handling, and most of the examples out there seem to be for getting video frames from cameras, which is not what I'm after.

At the moment I feel that if I can get individual frames of the video as a CGImageRef I'd be set, so I'm wondering how to do this? Perhaps there are even better ways to do it? Where should I start, and what's the most straightforward file format for video playback on iOS? .mov?

Recommended answer

Apologies; typing on an iPhone so I'll be a little brief.

Create an AVURLAsset with the URL of your video - which can be a local file URL if you like. Anything QuickTime can do is fine, so MOV or M4V in H.264 is probably the best source.
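
A minimal sketch of that step in Objective-C, assuming the video ships in the app bundle (the file name video.m4v is a placeholder):

    #import <AVFoundation/AVFoundation.h>

    // Hypothetical local file; any URL QuickTime can open works the same way.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"video"
                                         withExtension:@"m4v"];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];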

Query the asset for tracks of type AVMediaTypeVideo. You should get just one unless your source video has multiple camera angles or something like that, so just taking objectAtIndex:0 should give you the AVAssetTrack you want.
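
Continuing the sketch:

    // The first (and normally only) video track.
    AVAssetTrack *videoTrack =
        [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];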

Use that to create an AVAssetReaderTrackOutput. Probably you want to specify kCVPixelFormatType_32BGRA.
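
Something along these lines (settings and trackOutput are names I've made up):

    // Ask the reader to decode into BGRA, ready for upload to GL.
    NSDictionary *settings = [NSDictionary dictionaryWithObject:
            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
        forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    AVAssetReaderTrackOutput *trackOutput =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                                   outputSettings:settings];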

Create an AVAssetReader using the asset, attach the asset reader track output as an output, and call startReading.
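
Roughly:

    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset
                                                          error:&error];
    [reader addOutput:trackOutput];
    [reader startReading];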

Henceforth you can call copyNextSampleBuffer on the track output to get new CMSampleBuffers, putting you in the same position as if you were taking input from the camera. So you can lock that to get at pixel contents and push those to OpenGL via Apple's BGRA extension.
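
A sketch of that last step, assuming OpenGL ES 1.1 and a texture name videoTexture you created earlier with glGenTextures; row-padding and error checks are omitted for brevity:

    #import <CoreMedia/CoreMedia.h>
    #import <OpenGLES/ES1/gl.h>
    #import <OpenGLES/ES1/glext.h>

    CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
    if (sampleBuffer) {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        // Upload BGRA pixels via Apple's BGRA8888 texture extension;
        // this assumes bytes-per-row == width * 4 (no padding).
        glBindTexture(GL_TEXTURE_2D, videoTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                     (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                     (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                     0, GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                     CVPixelBufferGetBaseAddress(pixelBuffer));

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        CFRelease(sampleBuffer);
    }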
