H264 Video Streaming over RTMP on iOS


Question


With a bit of digging, I have found a library that extracts NAL units from a .mp4 file while it is being written. I'm attempting to packetize this information to FLV over RTMP using libavformat and libavcodec. I set up a video stream using:

-(void)setupVideoStream {
    int ret = 0;
    videoCodec = avcodec_find_decoder(STREAM_VIDEO_CODEC);

    if (videoCodec == nil) {
        NSLog(@"Could not find encoder %i", STREAM_VIDEO_CODEC);
        return;
    }

    videoStream                                 = avformat_new_stream(oc, videoCodec);

    videoCodecContext                           = videoStream->codec;

    videoCodecContext->codec_type               = AVMEDIA_TYPE_VIDEO;
    videoCodecContext->codec_id                 = STREAM_VIDEO_CODEC;
    videoCodecContext->pix_fmt                  = AV_PIX_FMT_YUV420P;
    videoCodecContext->profile                  = FF_PROFILE_H264_BASELINE;

    videoCodecContext->bit_rate                 = 512000;
    videoCodecContext->bit_rate_tolerance       = 0;

    videoCodecContext->width                    = STREAM_WIDTH;
    videoCodecContext->height                   = STREAM_HEIGHT;

    videoCodecContext->time_base.den            = STREAM_TIME_BASE;
    videoCodecContext->time_base.num            = 1;
    videoCodecContext->gop_size                 = STREAM_GOP;

    videoCodecContext->has_b_frames             = 0;
    videoCodecContext->ticks_per_frame          = 2;

    videoCodecContext->qcompress                = 0.6;
    videoCodecContext->qmax                     = 51;
    videoCodecContext->qmin                     = 10;
    videoCodecContext->max_qdiff                = 4;
    videoCodecContext->i_quant_factor           = 0.71;

    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        videoCodecContext->flags                |= CODEC_FLAG_GLOBAL_HEADER;

    videoCodecContext->extradata                = avcCHeader;
    videoCodecContext->extradata_size           = avcCHeaderSize;

    ret = avcodec_open2(videoStream->codec, videoCodec, NULL);
    if (ret < 0)
        NSLog(@"Could not open codec!");
}
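
For reference, oc above is the AVFormatContext for the FLV output; the question never shows how it is created. A hypothetical sketch (the method name and RTMP URL are placeholders, not from the original code) might look like:

// Hypothetical setup of the output context `oc` used above:
// an FLV muxer writing to an RTMP URL (the URL is a placeholder).
-(void)setupOutputContext {
    av_register_all();
    avformat_network_init();

    // Allocate a muxing context for the FLV container
    avformat_alloc_output_context2(&oc, NULL, "flv", NULL);
    if (oc == NULL) {
        NSLog(@"Could not allocate output context");
        return;
    }

    // Open the network connection to the ingest server
    if (avio_open(&oc->pb, "rtmp://example.com/live/stream", AVIO_FLAG_WRITE) < 0)
        NSLog(@"Could not open RTMP output");
}

After the stream is added in -setupVideoStream, avformat_write_header(oc, NULL) has to be called once before the first av_interleaved_write_frame().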


Then I connect, and each time the library extracts a NALU, it returns an array holding one or two NALUs to my RTMPClient. The method that handles the actual streaming looks like this:

-(void)writeNALUToStream:(NSArray*)data time:(double)pts {
    int ret = 0;
    uint8_t *buffer = NULL;
    int bufferSize = 0;

    // Number of NALUs within the data array
    int numNALUs = [data count];

    // First NALU
    NSData *fNALU = [data objectAtIndex:0];
    int fLen = [fNALU length];

    // If there is more than one NALU...
    if (numNALUs > 1) {
        // Second NALU
        NSData *sNALU = [data objectAtIndex:1];
        int sLen = [sNALU length];

        // Allocate a buffer the size of first data and second data
        buffer = av_malloc(fLen + sLen);

        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);

        // Copy the second data bytes of sLen into the buffer + fLen + 1
        memcpy(buffer + fLen + 1, [sNALU bytes], sLen);

        // Update the size of the buffer
        bufferSize = fLen + sLen;
    }else {
        // Allocate a buffer the size of first data
        buffer = av_malloc(fLen);

        // Copy the first data bytes of fLen into the buffer
        memcpy(buffer, [fNALU bytes], fLen);

        // Update the size of the buffer
        bufferSize = fLen;
    }

    // Initialize the packet
    av_init_packet(&pkt);

    //av_packet_from_data(&pkt, buffer, bufferSize);

    // Set the packet data to the buffer
    pkt.data            = buffer;
    pkt.size            = bufferSize;
    pkt.pts             = pts;

    // Stream index 0 is the video stream
    pkt.stream_index    = 0;

    // Add a key frame flag every 15 frames
    if ((processedFrames % 15) == 0)
        pkt.flags       |= AV_PKT_FLAG_KEY;

    // Write the frame to the stream
    ret = av_interleaved_write_frame(oc, &pkt);
    if (ret < 0) 
        NSLog(@"Error writing frame %i to stream", processedFrames);
    else {
        // Update the number of frames successfully streamed
        frameCount++;
        // Update the number of bytes successfully sent
        bytesSent += pkt.size;
    }

    // Update the number of frames processed
    processedFrames++;
    // Update the number of bytes processed
    processedBytes += pkt.size;

    free((uint8_t*)buffer);
    // Free the packet
    av_free_packet(&pkt);
}


After about 100 or so frames, I get an error:

malloc: *** error for object 0xe5bfa0: incorrect checksum for freed object - object was probably modified after being freed. *** set a breakpoint in malloc_error_break to debug


I cannot seem to stop this from happening. I've tried commenting out the av_free_packet() call and the free(), along with trying av_packet_from_data() rather than initializing the packet and setting the data and size values myself.
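
For what it's worth, av_packet_from_data() changes the ownership rules, which may explain why that attempt also crashed: the packet takes ownership of an av_malloc()ed, padded buffer, so the manual free() has to go. A minimal sketch of its intended use (not the original attempt):

// av_packet_from_data() makes the packet own `buffer`; the buffer must come
// from av_malloc() with FF_INPUT_BUFFER_PADDING_SIZE extra bytes, the manual
// free(buffer) must be removed, and av_free_packet() then releases the data.
av_init_packet(&pkt);
if (av_packet_from_data(&pkt, buffer, bufferSize) < 0) {
    av_free(buffer);
    return;
}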


My question is: how can I stop this error from happening? Also, according to Wireshark these are proper RTMP H264 packets, yet they play nothing more than a black screen. Is there some glaring error that I am overlooking?

Answer


It looks to me like you are overflowing your buffer and corrupting your stream here:

memcpy(buffer + fLen + 1, [sNALU bytes], sLen);


You are allocating fLen + sLen bytes then writing fLen + sLen + 1 bytes. Just get rid of the + 1.
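
That is, the second copy should land immediately after the first NALU, not one byte past it:

// Second NALU goes directly after the first -- no + 1
memcpy(buffer + fLen, [sNALU bytes], sLen);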


Because your AVPacket is allocated on the stack, av_free_packet() is not needed. Finally, it is considered good practice to allocate extra padding bytes for libav: av_malloc(size + FF_INPUT_BUFFER_PADDING_SIZE).
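
Combined with the fix above, the allocation in the two-NALU branch might become something like this (a sketch; the padding is zeroed because optimized readers can read a few bytes past the end of the data):

// Allocate room for both NALUs plus libav's recommended padding,
// and zero the padding bytes
buffer = av_malloc(fLen + sLen + FF_INPUT_BUFFER_PADDING_SIZE);
memset(buffer + fLen + sLen, 0, FF_INPUT_BUFFER_PADDING_SIZE);

To match the allocator, the free((uint8_t*)buffer) at the end of the method should also be av_free(buffer).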

