Deinterlacing in ffmpeg


Question

I've followed the tutorial here to load video files into a C program. But the frames aren't deinterlaced.

From what I've seen, the ffmpeg executable supports a -deinterlace switch. How do I do this in code? What library/functions should I read about?

Answer

You have to manually call avpicture_deinterlace to deinterlace each decoded frame. The function definition can be found here. It will basically look like this (using the variables from the first page of the tutorial):

// Decode the next video packet into pFrame (old avcodec_decode_video API,
// as used in the tutorial).
avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
                     packet.data, packet.size);

if(frameFinished) {
    // Deinterlace the decoded frame into pDiFrame, keeping the decoded pixel format.
    avpicture_deinterlace((AVPicture*)pDiFrame,
                          (const AVPicture*)pFrame,
                          pCodecCtx->pix_fmt,
                          width,
                          height);
    .
    .
    .
}

Keep in mind that you have to initialize pDiFrame similarly to how pFrameRGB is initialized in the tutorial: create your own buffer and call avcodec_alloc_frame and avpicture_fill, only this time the pixel format will be that of the decoded frame (pCodecCtx->pix_fmt), not 24-bit RGB.
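
A minimal sketch of that allocation, assuming the same old avcodec/avpicture API the tutorial uses (the names pDiFrame, numBytesDI, and diBuffer are illustrative, not from the tutorial):

// Allocate a frame and a buffer for the deinterlaced picture (sketch).
AVFrame *pDiFrame = avcodec_alloc_frame();
if(pDiFrame == NULL)
    return -1;

// Size the buffer for the decoded pixel format, not PIX_FMT_RGB24.
int numBytesDI = avpicture_get_size(pCodecCtx->pix_fmt,
                                    pCodecCtx->width, pCodecCtx->height);
uint8_t *diBuffer = (uint8_t *)av_malloc(numBytesDI * sizeof(uint8_t));

// Point pDiFrame's data/linesize fields at the buffer.
avpicture_fill((AVPicture *)pDiFrame, diBuffer, pCodecCtx->pix_fmt,
               pCodecCtx->width, pCodecCtx->height);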

After deinterlacing, you can then perform the conversion from the deinterlaced frame to RGB as it shows in the tutorial.
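
As a rough sketch of that last step, assuming the libswscale path the tutorial takes (the sws_ctx setup shown here is an assumption, not part of the original answer):

// Convert the deinterlaced frame to 24-bit RGB with libswscale (sketch).
struct SwsContext *sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                            pCodecCtx->pix_fmt,
                                            pCodecCtx->width, pCodecCtx->height,
                                            PIX_FMT_RGB24, SWS_BILINEAR,
                                            NULL, NULL, NULL);

sws_scale(sws_ctx,
          (const uint8_t * const *)pDiFrame->data, pDiFrame->linesize,
          0, pCodecCtx->height,
          pFrameRGB->data, pFrameRGB->linesize);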
