Is it possible to use video as a texture for GL in iOS?
Question
Is it possible to use video (pre-rendered, compressed with H.264) as a texture for GL in iOS?

If so, how is it done? And are there any playback quality/frame-rate limitations?
Answer
As of iOS 4.0, you can use AVCaptureDeviceInput to get the camera as a device input and connect it to an AVCaptureVideoDataOutput with any object you like set as the delegate. By setting a 32bpp BGRA format for the camera, the delegate object will receive each frame from the camera in a format suitable for handing straight to glTexImage2D (or glTexSubImage2D if the device doesn't support non-power-of-two textures; I believe the MBX devices fall into this category).
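A minimal sketch of that pre-iOS-5 path. The ivar names `_session` (a configured, running AVCaptureSession) and `_texture` (an existing GL texture name) are my own, as is the choice of queue; it also assumes the EAGLContext is current on the queue the delegate runs on:

```objc
- (void)attachVideoOutput
{
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    // Ask for 32bpp BGRA, as described above.
    output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                  @(kCVPixelFormatType_32BGRA) };
    [output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    if ([_session canAddOutput:output])
        [_session addOutput:output];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    // Hand the BGRA pixels straight to GL. GL_BGRA_EXT comes from the
    // GL_APPLE_texture_format_BGRA8888 extension, which iOS devices expose.
    glBindTexture(GL_TEXTURE_2D, _texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
                 (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
                 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
```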
There are a bunch of frame size and frame rate options; at a guess you'll have to tweak those depending on how much else you want to use the GPU for. I found that a completely trivial scene, just a textured quad showing the latest frame, redrawn only when a new frame arrived, was able to display an iPhone 4's maximum 720p 24fps feed without any noticeable lag. I haven't performed any more thorough benchmarking than that, so hopefully someone else can advise.
In principle, per the API, frames can come back with some in-memory padding between scanlines, which would mean some shuffling of contents before posting them off to GL, so you need to implement a code path for that case. In practice, speaking purely empirically, current versions of iOS never seem to return images in that form, so it isn't really a performance issue.
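That defensive path can be sketched by comparing CVPixelBufferGetBytesPerRow against the row width before uploading; this assumes a locked 32bpp BGRA buffer as above and an already-allocated texture of matching size:

```objc
// Only upload in place when the rows are tightly packed.
size_t width       = CVPixelBufferGetWidth(pixelBuffer);
size_t height      = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
uint8_t *base      = CVPixelBufferGetBaseAddress(pixelBuffer);

if (bytesPerRow == width * 4) {
    // No padding: hand the buffer to GL directly.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, (GLsizei)width, (GLsizei)height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, base);
} else {
    // Padded scanlines: repack row by row into a contiguous buffer first.
    uint8_t *packed = malloc(width * height * 4);
    for (size_t y = 0; y < height; y++)
        memcpy(packed + y * width * 4, base + y * bytesPerRow, width * 4);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, (GLsizei)width, (GLsizei)height,
                    GL_BGRA_EXT, GL_UNSIGNED_BYTE, packed);
    free(packed);
}
```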
It's now very close to three years later. In the interim Apple has released iOS 5, 6 and 7. With iOS 5 they introduced CVOpenGLESTexture and CVOpenGLESTextureCache, which are now the smart way to pipe video from a capture device into OpenGL. Apple supplies sample code; the particularly interesting parts are in RippleViewController.m, specifically its setupAVCapture and captureOutput:didOutputSampleBuffer:fromConnection: (see lines 196-329). Sadly the terms and conditions prevent duplicating that code here without attaching the whole project, but the step-by-step setup is:
- create a CVOpenGLESTextureCache (via CVOpenGLESTextureCacheCreate) and an AVCaptureSession;
- grab a suitable AVCaptureDevice for video;
- create an AVCaptureDeviceInput with that capture device;
- attach an AVCaptureVideoDataOutput and tell it to call you as a sample buffer delegate.
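The steps above might be sketched as follows. The ivar names (`_context` for your EAGLContext, `_videoTextureCache`, `_session`) and the 640x480 preset are my own choices, not taken from Apple's sample:

```objc
// One-time setup, e.g. from viewDidLoad after the EAGLContext exists.
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                            _context, NULL,
                                            &_videoTextureCache);
if (err != kCVReturnSuccess) {
    NSLog(@"CVOpenGLESTextureCacheCreate failed: %d", err);
    return;
}

_session = [[AVCaptureSession alloc] init];
[_session beginConfiguration];
[_session setSessionPreset:AVCaptureSessionPreset640x480];

AVCaptureDevice *device =
    [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input =
    [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
[_session addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// Biplanar YUV: plane 0 is Y, plane 1 is interleaved CbCr.
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                              @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[_session addOutput:output];

[_session commitConfiguration];
[_session startRunning];
```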
Upon receiving each sample buffer:
- get the CVImageBufferRef from it;
- use CVOpenGLESTextureCacheCreateTextureFromImage to get Y and UV CVOpenGLESTextureRefs from the CV image buffer;
- get texture targets and names from the CV OpenGLES texture refs in order to bind them;
- combine luminance and chrominance in your shader.
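A sketch of those per-frame steps, assuming the `_videoTextureCache` and biplanar YUV capture format from the setup above; the conversion matrix is standard full-range BT.601 and the shader uniform names are mine, not necessarily what Apple's sample uses:

```objc
// Per-frame work in captureOutput:didOutputSampleBuffer:fromConnection:.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t width  = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

CVOpenGLESTextureRef lumaTexture = NULL, chromaTexture = NULL;

// Plane 0: full-resolution luminance, one byte per pixel.
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_LUMINANCE, (GLsizei)width, (GLsizei)height,
    GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);

// Plane 1: interleaved CbCr at half resolution.
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, _videoTextureCache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, (GLsizei)width / 2, (GLsizei)height / 2,
    GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chromaTexture);

// Bind via the targets and names the cache hands back.
glActiveTexture(GL_TEXTURE0);
glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture),
              CVOpenGLESTextureGetName(lumaTexture));
glActiveTexture(GL_TEXTURE1);
glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture),
              CVOpenGLESTextureGetName(chromaTexture));

// ... draw, then CFRelease both texture refs and call
// CVOpenGLESTextureCacheFlush(_videoTextureCache, 0) to recycle them.

// Fragment shader combining the two planes (full-range BT.601 matrix):
static const char *kFragmentShader =
    "varying highp vec2 v_texCoord;\n"
    "uniform sampler2D u_luma;\n"
    "uniform sampler2D u_chroma;\n"
    "void main() {\n"
    "    mediump vec3 yuv;\n"
    "    yuv.x  = texture2D(u_luma, v_texCoord).r;\n"
    "    yuv.yz = texture2D(u_chroma, v_texCoord).ra - vec2(0.5);\n"
    "    gl_FragColor = vec4(mat3( 1.0,    1.0,   1.0,\n"
    "                              0.0,   -0.344, 1.772,\n"
    "                              1.402, -0.714, 0.0) * yuv, 1.0);\n"
    "}\n";
```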