Why do we need a buffer while converting AVFrame from one format to another?

Question

I am referring to this source code. The code snippets provided here are from lines 114-138 of that code, which uses the ffmpeg library. Can anyone explain why the following code is required in the program?

// Determine required buffer size and allocate buffer
numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,
                            pCodecCtx->height);
buffer=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t)); 

In a sense I understand that the following function is associating the destination frame with the buffer. But what is the necessity?

avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);  

PS: I tried removing the buffer and compiling the program. It compiled, but it shows the following runtime error.

[swscaler @ 0xa06d0a0] bad dst image pointers
Segmentation fault (core dumped)

Answer

I think that what puzzles you is that there seem to be two allocations for AVFrame.

The first, done with avcodec_alloc_frame(), allocates the space for a generic frame and its metadata. At this point the memory required to hold the frame proper is still unknown.

You then populate that frame from another source, and it is then that you specify how much memory you need by passing width, height and color depth (for PIX_FMT_RGB24 this works out to essentially width × height × 3 bytes):

numBytes=avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width, pCodecCtx->height);

At this point the frame and its content are two separate objects (an AVFrame and its buffer). You put them together with this code, which is not actually a conversion at all:

avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);

What the code above does is to "tell" pFrameRGB: "you are an RGB-24 frame, this wide, this tall, and the memory you need is in buffer".
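
As an illustration, for a packed format like RGB-24 avpicture_fill boils down to roughly the following (a simplified sketch, not the library's actual implementation, which also handles planar formats and padding):

// Simplified sketch of what avpicture_fill() does for PIX_FMT_RGB24.
// RGB-24 is a packed format: a single plane, 3 bytes (R, G, B) per pixel.
pFrameRGB->data[0]     = buffer;                 // pixel data lives in 'buffer'
pFrameRGB->data[1]     = NULL;                   // no second or third plane
pFrameRGB->data[2]     = NULL;
pFrameRGB->linesize[0] = pCodecCtx->width * 3;   // bytes per row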

Then and only then you can do whatever you want with pFrameRGB. Otherwise, you try to paint on a frame without the canvas, and the paint splashes down -- you get a core dump.

Once you have the frame (AVFrame) and the canvas (the buffer), you can use it:

// Read frames and save first five frames to disk
i=0;
while(av_read_frame(pFormatCtx, &packet)>=0) {
    // Is this a packet from the video stream?
    if(packet.stream_index==videoStream) {
      // Decode video frame
      avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);

The above code extracts a video frame and decodes it into pFrame (which is in the codec's native format). We could save pFrame to disk at this stage; we would not need buffer, and we would then have no use for pFrameRGB.
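
For instance, since the native format is typically planar YUV, one could dump just the luma (Y) plane of pFrame as a grayscale PGM image. This is a hypothetical sketch (SaveGray is not part of the tutorial code), assuming a planar YUV pixel format:

#include <stdio.h>

// Hypothetical helper: write the luma (Y) plane of a decoded frame
// as a binary PGM grayscale image, straight from the native format.
static void SaveGray(AVFrame *pFrame, int width, int height, int iFrame) {
    char szFilename[32];
    FILE *pFile;
    int  y;

    sprintf(szFilename, "frame%d.pgm", iFrame);
    pFile = fopen(szFilename, "wb");
    if (pFile == NULL)
        return;
    fprintf(pFile, "P5\n%d %d\n255\n", width, height);
    // data[0] is the Y plane; linesize[0] can be wider than 'width'
    // because of padding, so write one row at a time.
    for (y = 0; y < height; y++)
        fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, width, pFile);
    fclose(pFile);
}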

Instead we convert the frame to RGB-24 using sws_scale().
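
A minimal sketch of that conversion (variable names follow the tutorial; the sws_ctx context would be created once, before the decode loop):

// Conversion context: from the codec's native format to RGB-24,
// keeping the same width and height.
struct SwsContext *sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                            pCodecCtx->pix_fmt,
                                            pCodecCtx->width, pCodecCtx->height,
                                            PIX_FMT_RGB24, SWS_BILINEAR,
                                            NULL, NULL, NULL);

// Inside the loop, once a frame is fully decoded: read from pFrame and
// write into pFrameRGB, whose data[] pointers lead into 'buffer'.
sws_scale(sws_ctx, (const uint8_t * const *)pFrame->data, pFrame->linesize,
          0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);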

To convert a frame into another format, we copy the source to a different destination. This is both because the destination frame could be bigger than what can be accommodated by the source frame, and because some conversion algorithms need to operate on larger areas of the untransformed source, so it would be awkward to transmogrify the source in-place. Also, the source frame is handled by the library and might conceivably not be safe to write to.

What does the data[] of pFrame/pFrameRGB point to? Initially, nothing: the pointers are NULL, and that is why using an uninitialized AVFrame results in a core dump. You initialize them (and linesize[] etc.) using avpicture_fill (which takes an empty buffer plus image format and size information) or one of the decode functions (which do the same).
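
You can see this for yourself with a small diagnostic sketch (not part of the original program):

AVFrame *f = avcodec_alloc_frame();   // av_frame_alloc() in newer FFmpeg
printf("before fill: data[0] = %p\n", (void *)f->data[0]);   // prints NULL

avpicture_fill((AVPicture *)f, buffer, PIX_FMT_RGB24,
               pCodecCtx->width, pCodecCtx->height);
printf("after fill:  data[0] = %p, linesize[0] = %d\n",
       (void *)f->data[0], f->linesize[0]);   // now points into 'buffer'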

Why does pFrame not require memory allocation? Good question. The answer is in the prototype of the decode function, where the picture parameter is described thus:

The AVFrame in which the decoded video frame will be stored. Use avcodec_alloc_frame to get an AVFrame, the codec will allocate memory for the actual bitmap. with default get/release_buffer(), the decoder frees/reuses the bitmap as it sees fit. with overridden get/release_buffer() (needs CODEC_CAP_DR1) the user decides into what buffer the decoder decodes and the decoder tells the user once it does not need the data anymore, the user app can at this point free/reuse/keep the memory as it sees fit.
