Can I use AVFoundation to stream downloaded video frames into an OpenGL ES texture?
Question
I've been able to use AVFoundation's AVAssetReader class to upload video frames into an OpenGL ES texture. It has a caveat, however, in that it fails when used with an AVURLAsset that points to remote media. This failure isn't well documented, and I'm wondering if there's any way around the shortcoming.
Answer
There's some API that was released with iOS 6 that I've been able to use to make the process a breeze. It doesn't use AVAssetReader at all, and instead relies on a class called AVPlayerItemVideoOutput. An instance of this class can be added to any AVPlayerItem instance via a new -addOutput: method.
Unlike AVAssetReader, this class works fine for AVPlayerItems that are backed by a remote AVURLAsset, and it also has the benefit of allowing a more sophisticated playback interface that supports non-linear playback via -copyPixelBufferForItemTime:itemTimeForDisplay: (instead of AVAssetReader's severely limiting -copyNextSampleBuffer method).
// Initialize the AVFoundation state
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someUrl options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus status = [asset statusOfValueForKey:@"tracks" error:&error];
    if (status == AVKeyValueStatusLoaded)
    {
        NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] };
        AVPlayerItemVideoOutput *output = [[[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:settings] autorelease];
        AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
        [playerItem addOutput:output];
        AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

        // Assume some instance variables exist here. You'll need them to control the
        // playback of the video (via the AVPlayer), and to copy sample buffers (via the AVPlayerItemVideoOutput).
        [self setPlayer:player];
        [self setPlayerItem:playerItem];
        [self setOutput:output];
    }
    else
    {
        NSLog(@"%@ Failed to load the tracks.", self);
    }
}];

// Now at any later point in time, you can get a pixel buffer
// that corresponds to the current AVPlayer state like this:
CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:[[self playerItem] currentTime] itemTimeForDisplay:nil];
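
Because -copyPixelBufferForItemTime:itemTimeForDisplay: accepts an arbitrary item time, the non-linear access mentioned above falls out naturally: seek the AVPlayer, then ask the output for the frame at the resulting time. A minimal sketch under the setup above; the 42-second target and the -renderPixelBuffer: method are hypothetical stand-ins for your own code:

// Non-linear access: seek the player, then copy the frame at the new time.
CMTime target = CMTimeMakeWithSeconds(42.0, 600); // hypothetical destination
[[self player] seekToTime:target completionHandler:^(BOOL finished) {
    if (!finished) return;
    CMTime itemTime = [[self playerItem] currentTime];
    // hasNewPixelBufferForItemTime: avoids copying a frame you've already drawn.
    if ([[self output] hasNewPixelBufferForItemTime:itemTime])
    {
        CVPixelBufferRef buffer = [[self output] copyPixelBufferForItemTime:itemTime itemTimeForDisplay:nil];
        if (buffer)
        {
            [self renderPixelBuffer:buffer]; // hypothetical method that uploads and draws the frame
            CVBufferRelease(buffer);         // the "copy" in the name means you own this reference
        }
    }
}];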
Once you've got your buffer, you can upload it to OpenGL however you want. I recommend the horribly documented CVOpenGLESTextureCacheCreateTextureFromImage() function, because you'll get hardware acceleration on all the newer devices, which is much faster than glTexSubImage2D(). See Apple's GLCameraRipple and RosyWriter demos for examples.