Hardware accelerated h.264 decoding to texture, overlay or similar in iOS


Problem Description

Is it possible, and supported, to use the iOS hardware accelerated h.264 decoding API to decode a local (not streamed) video file, and then compose other objects on top of it?

I would like to make an application that involves drawing graphical objects in front of a video, and use the playback timer to synchronize what I am drawing on top with what is being played in the video. Then, based on the user's actions, change what I am drawing on top (but not the video).

Coming from DirectX, OpenGL and OpenGL ES for Android, I am picturing something like rendering the video to a texture, and using that texture to draw a full screen quad, then use other sprites to draw the rest of the objects; or maybe writing an intermediate filter just before the renderer, so I can manipulate the individual output frames and draw my stuff; or maybe drawing to a 2D layer on top of the video.

It seems like AV Foundation, or Core Media may help me do what I am doing, but before I dig into the details, I would like to know if it is possible at all to do what I want to do, and what are my main routes to approach the problem.

Please refrain from "this is too advanced for you, try hello world first" answers. I know my stuff, and just want to know if what I want to do is possible (and most importantly, supported, so the app won't get eventually rejected), before I study the details by myself.

edit:

I am not knowledgeable in iOS development, but professionally do DirectX, OpenGL and OpenGL ES for Android. I am considering making an iOS version of an Android application I currently have, and I just want to know if this is possible. If so, I have enough time to start iOS development from scratch, up to doing what I want to do. If it is not possible, then I will just not invest time studying the entire platform at this time.

Therefore, this is a technical feasibility question. I am not requesting code. I am looking for answers of the type "Yes, you can do that. Just use A and B, use C to render into D and draw your stuff with E", or "No, you can't. The hardware accelerated decoding is not available for third-party applications" (which is what a friend told me). Just this, and I'll be on my way.

I have read the overview of the video technologies on page 32 of the iOS Technology Overview. It pretty much says that I can use Media Player for the most simple playback functionality (not what I'm looking for), UIKit for embedding videos with a little more control over the embedding, but not over the actual playback (not what I'm looking for), AVFoundation for more control over playback (maybe what I need, but most of the resources I find online talk about how to use the camera), or Core Media to have full low-level control over video (probably what I need, but extremely poorly documented, and even more lacking in resources on playback than AVFoundation).

I am concerned that I may dedicate the next six months to learn iOS programming full time, only to find at the end that the relevant API is not available for third party developers, and what I want to do is unacceptable for iTunes store deployment. This is what my friend told me, but I can't seem to find anything relevant in the app development guidelines. Therefore, I came here to ask people who have more experience in this area, whether or not what I want to do is possible. No more.

I consider this a valid high level question, which can be misunderstood as an I-didn't-do-my-homework-plz-give-me-teh-codez question. If my judgement here was mistaken, feel free to delete or downvote this question to your heart's content.

Solution

Yes, you can do this, and I think your question was specific enough to belong here. You're not the only one who has wanted to do this, and it does take a little digging to figure out what you can and can't do.

AV Foundation lets you do hardware-accelerated decoding of H.264 videos using an AVAssetReader, at which point you're handed the raw decoded frames of video in BGRA format. These can be uploaded to a texture using either glTexImage2D() or the more efficient texture caches in iOS 5.0. From there, you can process for display or retrieve the frames from OpenGL ES and use an AVAssetWriter to perform hardware-accelerated H.264 encoding of the result. All of this uses public APIs, so at no point do you get anywhere near something that would lead to a rejection from the App Store.
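
The GPUImageMovie code quoted further down covers the reading and texture-upload side of this; for the writer side, a minimal sketch of an AVAssetWriter configured for H.264 encoding might look roughly like the following. Here outputURL, the 640x480 dimensions, and the rendered pixel buffers you would append (read back from OpenGL ES) are placeholders, not code taken from GPUImage:

NSError *error = nil;
AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                       fileType:AVFileTypeQuickTimeMovie
                                                          error:&error];

// Ask for H.264 encoding of the video track (performed in hardware on iOS devices)
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:640], AVVideoWidthKey,
                               [NSNumber numberWithInt:480], AVVideoHeightKey,
                               nil];
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                     outputSettings:videoSettings];
writerInput.expectsMediaDataInRealTime = NO;

// The adaptor lets you append CVPixelBufferRefs, e.g. frames you have rendered with OpenGL ES
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                     sourcePixelBufferAttributes:nil];
[assetWriter addInput:writerInput];

[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:kCMTimeZero];

// For each processed frame:
//   [pixelBufferAdaptor appendPixelBuffer:renderedPixelBuffer withPresentationTime:frameTime];
// and when the movie is finished:
//   [writerInput markAsFinished];
//   [assetWriter finishWriting]; // finishWritingWithCompletionHandler: on newer iOS versions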

However, you don't have to roll your own implementation of this. My BSD-licensed open source framework GPUImage encapsulates these operations and handles all of this for you. You create a GPUImageMovie instance for your input H.264 movie, attach filters onto it (such as overlay blends or chroma keying operations), and then attach these filters to a GPUImageView for display and/or a GPUImageMovieWriter to re-encode an H.264 movie from the processed video.
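
A rough usage sketch of that pipeline, assuming the current GPUImage class names (the sepia filter and the 640x480 output size are arbitrary stand-ins; you would substitute an overlay blend or chroma key filter and your own URLs):

GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:movieURL];

// Any filter can go here; a blend filter would take a second input for the graphics drawn on top
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];
[movieFile addTarget:filter];

// Show the filtered video on screen
GPUImageView *filterView = (GPUImageView *)self.view;
[filter addTarget:filterView];

// And/or re-encode the filtered result to a new H.264 movie
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL
                                                                             size:CGSizeMake(640.0, 480.0)];
[filter addTarget:movieWriter];

[movieWriter startRecording];
[movieFile startProcessing];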

The one issue I currently have is that I don't obey the timestamps in the video for playback, so frames are processed as quickly as they are decoded from the movie. For filtering and re-encoding of a video, this isn't a problem, because the timestamps are passed through to the recorder, but for direct display to the screen this means that the video can be sped up by as much as 2-4X. I'd welcome any contributions that would let you synchronize the playback rate to the actual video timestamps.
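
As a rough, untested starting point, one way to pace playback might be to wait out the gap between consecutive presentation timestamps before handing each frame off for processing; previousSampleTime and previousFrameWallClockTime here are hypothetical instance variables, not part of the current GPUImageMovie class:

CMTime currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBufferRef);

if (CMTIME_IS_VALID(previousSampleTime))
{
    // How far apart the two frames should be, according to the movie's own timestamps
    NSTimeInterval frameInterval = CMTimeGetSeconds(CMTimeSubtract(currentSampleTime, previousSampleTime));
    // How long it actually took to get here since the previous frame was handed off
    NSTimeInterval elapsed = CFAbsoluteTimeGetCurrent() - previousFrameWallClockTime;

    if (frameInterval > elapsed)
    {
        usleep((useconds_t)((frameInterval - elapsed) * 1000000.0));
    }
}

previousSampleTime = currentSampleTime;
previousFrameWallClockTime = CFAbsoluteTimeGetCurrent();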

I can currently play back, filter, and re-encode 640x480 video at well over 30 FPS on an iPhone 4 and 720p video at ~20-25 FPS, with the iPhone 4S being capable of 1080p filtering and encoding at significantly higher than 30 FPS. Some of the more expensive filters can tax the GPU and slow this down a bit, but most filters operate in these framerate ranges.

If you want, you can examine the GPUImageMovie class to see how it does this uploading to OpenGL ES, but the relevant code is as follows:

- (void)startProcessing;
{
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions];

    [inputAsset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
        NSError *error = nil;
        AVKeyValueStatus tracksStatus = [inputAsset statusOfValueForKey:@"tracks" error:&error];
        if (tracksStatus != AVKeyValueStatusLoaded)
        {
            return;
        }
        reader = [AVAssetReader assetReaderWithAsset:inputAsset error:&error];

        NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
        [outputSettings setObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]  forKey: (NSString*)kCVPixelBufferPixelFormatTypeKey];
        // Maybe set alwaysCopiesSampleData to NO on iOS 5.0 for faster video decoding
        AVAssetReaderTrackOutput *readerVideoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[inputAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] outputSettings:outputSettings];
        [reader addOutput:readerVideoTrackOutput];

        NSArray *audioTracks = [inputAsset tracksWithMediaType:AVMediaTypeAudio];
        BOOL shouldRecordAudioTrack = (([audioTracks count] > 0) && (self.audioEncodingTarget != nil) );
        AVAssetReaderTrackOutput *readerAudioTrackOutput = nil;

        if (shouldRecordAudioTrack)
        {            
            audioEncodingIsFinished = NO;

            // This might need to be extended to handle movies with more than one audio track
            AVAssetTrack* audioTrack = [audioTracks objectAtIndex:0];
            readerAudioTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];
            [reader addOutput:readerAudioTrackOutput];
        }

        if ([reader startReading] == NO) 
        {
            NSLog(@"Error reading from file at URL: %@", self.url);
            return;
        }

        if (synchronizedMovieWriter != nil)
        {
            __unsafe_unretained GPUImageMovie *weakSelf = self;

            [synchronizedMovieWriter setVideoInputReadyCallback:^{
                [weakSelf readNextVideoFrameFromOutput:readerVideoTrackOutput];
            }];

            [synchronizedMovieWriter setAudioInputReadyCallback:^{
                [weakSelf readNextAudioSampleFromOutput:readerAudioTrackOutput];
            }];

            [synchronizedMovieWriter enableSynchronizationCallbacks];
        }
        else
        {
            while (reader.status == AVAssetReaderStatusReading) 
            {
                [self readNextVideoFrameFromOutput:readerVideoTrackOutput];

                if ( (shouldRecordAudioTrack) && (!audioEncodingIsFinished) )
                {
                    [self readNextAudioSampleFromOutput:readerAudioTrackOutput];
                }

            }            

            if (reader.status == AVAssetReaderStatusCompleted) {
                [self endProcessing];
            }
        }
    }];
}

- (void)readNextVideoFrameFromOutput:(AVAssetReaderTrackOutput *)readerVideoTrackOutput;
{
    if (reader.status == AVAssetReaderStatusReading)
    {
        CMSampleBufferRef sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
        if (sampleBufferRef) 
        {
            runOnMainQueueWithoutDeadlocking(^{
                [self processMovieFrame:sampleBufferRef]; 
            });

            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
        else
        {
            videoEncodingIsFinished = YES;
            [self endProcessing];
        }
    }
    else if (synchronizedMovieWriter != nil)
    {
        if (reader.status == AVAssetReaderStatusCompleted)
        {
            [self endProcessing];
        }
    }
}

- (void)processMovieFrame:(CMSampleBufferRef)movieSampleBuffer; 
{
    CMTime currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(movieSampleBuffer);
    CVImageBufferRef movieFrame = CMSampleBufferGetImageBuffer(movieSampleBuffer);

    int bufferHeight = CVPixelBufferGetHeight(movieFrame);
    int bufferWidth = CVPixelBufferGetWidth(movieFrame);

    CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();

    if ([GPUImageOpenGLESContext supportsFastTextureUpload])
    {
        CVPixelBufferLockBaseAddress(movieFrame, 0);

        [GPUImageOpenGLESContext useImageProcessingContext];
        CVOpenGLESTextureRef texture = NULL;
        CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, movieFrame, NULL, GL_TEXTURE_2D, GL_RGBA, bufferWidth, bufferHeight, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

        if (!texture || err) {
            NSLog(@"Movie CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);  
            return;
        }

        outputTexture = CVOpenGLESTextureGetName(texture);
        //        glBindTexture(CVOpenGLESTextureGetTarget(texture), outputTexture);
        glBindTexture(GL_TEXTURE_2D, outputTexture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        for (id<GPUImageInput> currentTarget in targets)
        {            
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger targetTextureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setInputSize:CGSizeMake(bufferWidth, bufferHeight) atIndex:targetTextureIndex];
            [currentTarget setInputTexture:outputTexture atIndex:targetTextureIndex];

            [currentTarget newFrameReadyAtTime:currentSampleTime];
        }

        CVPixelBufferUnlockBaseAddress(movieFrame, 0);

        // Flush the CVOpenGLESTexture cache and release the texture
        CVOpenGLESTextureCacheFlush(coreVideoTextureCache, 0);
        CFRelease(texture);
        outputTexture = 0;        
    }
    else
    {
        // Upload to texture
        CVPixelBufferLockBaseAddress(movieFrame, 0);

        glBindTexture(GL_TEXTURE_2D, outputTexture);
        // Using BGRA extension to pull in video frame data directly
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(movieFrame));

        CGSize currentSize = CGSizeMake(bufferWidth, bufferHeight);
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger targetTextureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setInputSize:currentSize atIndex:targetTextureIndex];
            [currentTarget newFrameReadyAtTime:currentSampleTime];
        }
        CVPixelBufferUnlockBaseAddress(movieFrame, 0);
    }

    if (_runBenchmark)
    {
        CFAbsoluteTime currentFrameTime = (CFAbsoluteTimeGetCurrent() - startTime);
        NSLog(@"Current frame time : %f ms", 1000.0 * currentFrameTime);
    }
}
