ffmpeg audio frame from DirectShow SampleCB IMediaSample

Problem Description

I use the ISampleGrabber SampleCB callback to get audio samples. I can get the buffer and buffer length from the IMediaSample, and I use avcodec_fill_audio_frame(frame, ost->enc->channels, ost->enc->sample_fmt, (uint8_t *)buffer, length, 0) to make an AVFrame, but this frame does not produce any audio in my muxed file. I think the length is much smaller than frame_size. Can anyone help me, please, or give me an example if possible? Thank you.

This is my SampleCB code:

    HRESULT AudioSampleGrabberCallBack::SampleCB(double Time, IMediaSample *pSample)
    {
        // Grab the raw sample buffer delivered by the Sample Grabber and hand
        // it to the muxer.
        BYTE *pBuffer;
        pSample->GetPointer(&pBuffer);
        long BufferLen = pSample->GetActualDataLength();
        muxer->PutAudioFrame(pBuffer, BufferLen);
        return S_OK;
    }

And this is the Sample Grabber pin media type:

    AM_MEDIA_TYPE pmt2;
    ZeroMemory(&pmt2, sizeof(AM_MEDIA_TYPE));
    pmt2.majortype = MEDIATYPE_Audio;
    pmt2.subtype = FOURCCMap(0x1602);   // format tag 0x1602: AAC in LATM/LOAS framing
    pmt2.formattype = FORMAT_WaveFormatEx;
    hr = pSampleGrabber_audio->SetMediaType(&pmt2);

After that I use the FFmpeg muxing example to process the frames, and I think I only need to change the signal-generating part of the code:

    AVFrame *Muxing::get_audio_frame(OutputStream *ost, BYTE *buffer, long length)
    {
        AVFrame *frame = ost->tmp_frame;

        // Note: the buffer/length delivered by SampleCB are not used here yet;
        // the frame only gets a freshly allocated, empty sample buffer.
        int buffer_size = av_samples_get_buffer_size(NULL, ost->enc->channels,
                                                     ost->enc->frame_size,
                                                     ost->enc->sample_fmt, 0);
        av_samples_alloc(&frame->data[0], frame->linesize, ost->enc->channels,
                         ost->enc->frame_size, ost->enc->sample_fmt, 1);
        avcodec_fill_audio_frame(frame, ost->enc->channels, ost->enc->sample_fmt,
                                 frame->data[0], buffer_size, 1);

        frame->pts = ost->next_pts;
        ost->next_pts += frame->nb_samples;

        return frame;
    }

Recommended Answer

The code snippets suggest that you are getting AAC data using Sample Grabber and trying to write it into a file using FFmpeg's libavformat. This can work.

You initialize your Sample Grabber to get audio data in WAVE_FORMAT_AAC_LATM format. This format is not very widespread, and you should review your filter graph to make sure the upstream connection on the Sample Grabber is what you expect. There is a chance that somewhere there is a weird chain of filters that pretends to produce AAC-LATM while the data is actually invalid (or not even reaching the grabber callback). So you need to review the filter graph (see Loading a Graph From an External Process and Understanding Your DirectShow Filter Graph), then step through your callback with a debugger to make sure you are getting data and that it makes sense.
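
If you are not sure what actually connects to the Sample Grabber, a minimal sketch of the usual Running Object Table registration (the standard helper described in the Loading a Graph From an External Process article; the name AddToRot and the 256-character buffer are just the customary boilerplate) lets GraphEdit or GraphStudioNext attach to the live graph and show you the filter chain:

    #include <dshow.h>
    #include <strsafe.h>

    // Register the filter graph in the Running Object Table so an external tool
    // (GraphEdit / GraphStudioNext, "Connect to Remote Graph") can inspect it
    // while the application is running. Keep the returned cookie and pass it to
    // IRunningObjectTable::Revoke on shutdown.
    HRESULT AddToRot(IUnknown *pUnkGraph, DWORD *pdwRegister)
    {
        IRunningObjectTable *pROT = NULL;
        HRESULT hr = GetRunningObjectTable(0, &pROT);
        if (FAILED(hr))
            return hr;

        WCHAR wsz[256];
        StringCchPrintfW(wsz, 256, L"FilterGraph %08x pid %08x",
                         (DWORD)(DWORD_PTR)pUnkGraph, GetCurrentProcessId());

        IMoniker *pMoniker = NULL;
        hr = CreateItemMoniker(L"!", wsz, &pMoniker);
        if (SUCCEEDED(hr))
        {
            hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph,
                                pMoniker, pdwRegister);
            pMoniker->Release();
        }
        pROT->Release();
        return hr;
    }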

Next, you are expected to initialize the AVFormatContext and AVStream to indicate that you will be writing data in AAC LATM format. The code provided does not show you doing this correctly; the sample you are referring to uses its default codecs.
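
A minimal sketch of what declaring such a stream could look like. The helper name add_aac_stream and its parameters are assumptions of this sketch, not taken from your code, and the field names follow the older channel API your own snippets already use (ost->enc->channels); FFmpeg builds older than 3.1 put the same fields on st->codec instead of st->codecpar, and very recent releases use ch_layout:

    extern "C" {
    #include <libavformat/avformat.h>
    }

    // Declare a compressed AAC audio stream on the output context instead of
    // letting the muxing example create and configure its default encoder.
    AVStream *add_aac_stream(AVFormatContext *oc, int sample_rate, int channels)
    {
        AVStream *st = avformat_new_stream(oc, NULL);
        if (!st)
            return NULL;

        st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
        st->codecpar->codec_id    = AV_CODEC_ID_AAC;   // data is already compressed
        st->codecpar->sample_rate = sample_rate;
        st->codecpar->channels    = channels;

        // A hint only; the muxer may choose a different time base in
        // avformat_write_header().
        st->time_base.num = 1;
        st->time_base.den = sample_rate;

        return st;
    }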

Then you need to make sure that the incoming data and your FFmpeg output setup agree on whether the data does or does not have ADTS headers; the provided code does not shed any light on this.
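
If you want a quick sanity check on the bytes arriving in SampleCB, a rough heuristic (a sketch only, not a parser; the helper name is made up here) is that every ADTS frame starts with the 12-bit syncword 0xFFF, whereas raw AAC or LATM/LOAS payloads normally do not:

    #include <windows.h>

    // Heuristic only: an ADTS header begins with the syncword 0xFFF, i.e. the
    // first byte is 0xFF and the upper nibble of the second byte is 0xF.
    static bool LooksLikeAdts(const BYTE *data, long length)
    {
        return length >= 2 && data[0] == 0xFF && (data[1] & 0xF0) == 0xF0;
    }

Logging the result of such a check for the first few samples tells you which variant the upstream filters actually deliver.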

Furthermore, I am afraid you might be preparing your audio data incorrectly. The sample in question generates raw audio data and applies an encoder, avcodec_encode_audio2, to produce compressed content; a packet with compressed audio is then sent for writing using av_interleaved_write_frame. The way you attached your code snippets to the question makes me think you are doing it wrong. For starters, you still don't show the relevant code, which makes me think you have trouble identifying which code is relevant exactly. Then, in the get_audio_frame snippet, you are treating your AAC data as if it were raw PCM audio, whereas you should review the FFmpeg sample code keeping in mind that you already have compressed AAC data, and the sample reaches this point after returning from the avcodec_encode_audio2 call. This is where you are supposed to merge your code and the sample.
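
As a rough illustration of that merge point, here is a sketch of what the write side could look like once you accept that the SampleCB buffer is already an encoded AAC frame. The names write_aac_packet, next_pts and samples_per_frame (1024 for AAC) are assumptions of this sketch, and error handling is minimal:

    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>
    }
    #include <cstdint>
    #include <cstring>

    // Wrap the already-compressed AAC buffer from SampleCB in an AVPacket and
    // hand it straight to the muxer; no AVFrame, no avcodec_encode_audio2.
    int write_aac_packet(AVFormatContext *oc, AVStream *st, int64_t *next_pts,
                         int samples_per_frame, const uint8_t *buffer, int length)
    {
        AVPacket *pkt = av_packet_alloc();
        if (!pkt)
            return AVERROR(ENOMEM);

        int ret = av_new_packet(pkt, length);      // refcounted payload buffer
        if (ret < 0) {
            av_packet_free(&pkt);
            return ret;
        }
        memcpy(pkt->data, buffer, length);

        pkt->stream_index = st->index;

        // Timestamps are counted in samples and rescaled to the stream time base.
        AVRational sample_tb;
        sample_tb.num = 1;
        sample_tb.den = st->codecpar->sample_rate;
        pkt->pts = pkt->dts = av_rescale_q(*next_pts, sample_tb, st->time_base);
        *next_pts += samples_per_frame;

        ret = av_interleaved_write_frame(oc, pkt); // unrefs the packet's data
        av_packet_free(&pkt);
        return ret;
    }

With this approach the get_audio_frame helper and the encoder setup from the muxing example drop out entirely for the audio stream.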
