How to decode a H.264 frame on iOS by hardware decoding?


Problem Description

I have been using ffmpeg to decode every single frame that I receive from my IP cam. The brief code looks like this:

-(void) decodeFrame:(unsigned char *)frameData frameSize:(int)frameSize {
    AVFrame frame;
    AVPicture picture;
    AVPacket pkt;
    int got_picture = 0;

    av_init_packet(&pkt);
    pkt.data = frameData;
    pkt.size = frameSize;   // was "pat.size", a typo

    avcodec_get_frame_defaults(&frame);
    avpicture_alloc(&picture, PIX_FMT_RGB24, targetWidth, targetHeight);
    // "context" must be an AVCodecContext* previously opened with
    // avcodec_open2() for the H.264 decoder; pass the pointer itself,
    // not its address.
    avcodec_decode_video2(context, &frame, &got_picture, &pkt);
}

The code works fine, but it's software decoding. I want to enhance the decoding performance with hardware decoding. After lots of research, I know it may be achieved by the AVFoundation framework. The AVAssetReader class may help, but I can't figure out what the next step is. Could anyone point out the following steps for me? Any help would be appreciated.

Answer

iOS does not provide any public access directly to the hardware decode engine, because hardware is always used to decode H.264 video on iOS.

Therefore, session 513 ("Direct Access to Video Encoding and Decoding", WWDC 2014) gives you all the information you need to allow frame-by-frame decoding on iOS. In short, per that session:



    • Generate individual network abstraction layer units (NALUs) from your H.264 elementary stream. There is much information online on how this is done. VCL NALUs (IDR and non-IDR) contain your video data and are to be fed into the decoder.
    • Re-package those NALUs according to the "AVCC" format, removing NALU start codes and replacing them with a 4-byte NALU length header.
    • Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs via CMVideoFormatDescriptionCreateFromH264ParameterSets().
    • Package NALU frames as CMSampleBuffers per session 513.
    • Create a VTDecompressionSessionRef, and feed VTDecompressionSessionDecodeFrame() with the sample buffers.
      • Alternatively, use AVSampleBufferDisplayLayer, whose -enqueueSampleBuffer: method obviates the need to create your own decoder.

