FFMPEG decoding artifacts between keyframes


Problem description


I'm currently experiencing artifacts when decoding video using FFmpeg's API. On what I assume are intermediate frames, artifacts slowly build up, but only in areas of the frame with active movement. These artifacts accumulate for 50-100 frames until, I assume, a keyframe resets them. Frames are then decoded correctly and the artifacts proceed to build again.


One thing that is bothering me is that I have a few video samples at 30 fps (H.264) that work correctly, but all of my 60 fps videos (H.264) experience the problem.


I don't currently have enough reputation to post an image, so hopefully this link will work. http://i.imgur.com/PPXXkJc.jpg

int numBytes;
int frameFinished;
AVFrame* decodedRawFrame;
AVFrame* rgbFrame;

// Enum class for decoding results, used to break the decode loop once a frame is gathered
DecodeResult retResult = DecodeResult::Fail;

decodedRawFrame = av_frame_alloc();
rgbFrame = av_frame_alloc();
if (!decodedRawFrame || !rgbFrame) {
    fprintf(stderr, "Could not allocate video frame\n");
    return DecodeResult::Fail;
}

numBytes = avpicture_get_size(PIX_FMT_RGBA, mCodecCtx->width,mCodecCtx->height);
uint8_t* buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

avpicture_fill((AVPicture *) rgbFrame, buffer, PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);

AVPacket packet;

while(av_read_frame(mFormatCtx, &packet) >= 0 && retResult != DecodeResult::Success)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == mVideoStreamIndex)
    {
        // Decode video frame
        int decodeValue = avcodec_decode_video2(mCodecCtx, decodedRawFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished)// && rgbFrame->pict_type != AV_PICTURE_TYPE_NONE )
        {
            // Convert the image from its native format to RGB
            int SwsFlags = SWS_BILINEAR;
            // Accurate rounding clears up a problem where the start
            // of videos have green bars on them
            SwsFlags |= SWS_ACCURATE_RND;
            struct SwsContext *ctx = sws_getCachedContext(NULL, mCodecCtx->width, mCodecCtx->height, mCodecCtx->pix_fmt, mCodecCtx->width, mCodecCtx->height, 
                PIX_FMT_RGBA, SwsFlags, NULL, NULL, NULL);
            sws_scale(ctx, decodedRawFrame->data, decodedRawFrame->linesize, 0, mCodecCtx->height, rgbFrame->data, rgbFrame->linesize);

            //if(count%5 == 0 && count < 105)
            //  DebugSavePPMImage(rgbFrame, mCodecCtx->width, mCodecCtx->height, count);

            ++count;
            // Viewable frame is a struct to hold buffer and frame together in a queue
            ViewableFrame frame;
            frame.buffer = buffer;
            frame.frame = rgbFrame;
            mFrameQueue.push(frame);


            retResult = DecodeResult::Success;

            sws_freeContext(ctx);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}

// Check for end of file leftover frames
if(retResult != DecodeResult::Success)
{
    int result = av_read_frame(mFormatCtx, &packet);
    if(result < 0)
        isEoF = true;
    av_free_packet(&packet); 
}   

// Free the YUV frame
av_frame_free(&decodedRawFrame);


I'm attempting to build a queue of the decoded frames that I then use and free as needed. Is my separation of the frames causing the intermediate frames to be decoded incorrectly? I also break the decoding loop once I've successfully gathered a frame (DecodeResult::Success); most examples I've seen tend to loop through the whole video.


All codec contexts, video stream information, and format contexts are set up exactly as shown in the main function of https://github.com/chelyaev/ffmpeg-tutorial/blob/master/tutorial01.c

Any suggestions would be greatly appreciated.

Answer


For reference, if someone finds themselves in a similar position: apparently some older versions of FFMPEG have an issue when using sws_scale to convert an image without changing the actual dimensions of the final frame. If instead you create the flags for the SwsContext using:

int SwsFlags = SWS_BILINEAR;  // Whatever you want
SwsFlags |= SWS_ACCURATE_RND; // Under the hood forces ffmpeg to use the same logic as if scaled


SWS_ACCURATE_RND has a performance penalty, but for regular video it's probably not that noticeable. This will remove the splash of green, or the green bars along the edges of textures, if present.


I also want to thank Multimedia Mike and George Y. They were right that the way I was decoding the frames wasn't preserving the packets correctly, and that was what caused the video artifacts building from previous frames.
