Force gstreamer appsink buffers to only hold 10ms of data


Problem description

I have a gstreamer pipeline which drops all of its data into an appsink:

command = g_strdup_printf ("autoaudiosrc ! audio/x-raw-int, signed=true, endianness=1234, depth=%d, width=%d, channels=%d, rate=%d !"
                " appsink name=soundSink max_buffers=2 drop=true ",
                  bitDepthIn, bitDepthIn, channelsIn, sampleRateIn);

which typically looks like

autoaudiosrc ! audio/x-raw-int, signed=true, endianness=1234, depth=16, width=16, channels=1, rate=16000 ! appsink name=soundSink max_buffers=2 drop=true

at runtime.

It captures the audio fine; the problem is that it tends to capture any random amount of data it wants instead of a set size or time interval. So, for instance, the rtp lib that is asking for the data will only ask for 960 bytes (10 ms at 48 kHz / 1 channel / 16-bit depth), but the buffers will be anywhere from 10 ms to 26 ms in length. It is very important that this pipeline return only 10 ms per buffer. Is there a way to do this? Here is the code that grabs the data.

void GSTMediaStream::GetAudioInputData(void* data, int max_size, int& written)
{
    if (soundAppSink != NULL)
    {
        GstBuffer* buffer = gst_app_sink_pull_buffer (GST_APP_SINK (soundAppSink));
        if (buffer)
        {
            uint bufSize = MIN (GST_BUFFER_SIZE (buffer), max_size);
            uint offset = 0;

            std::cout << "buffer time length is " << GST_BUFFER_DURATION (buffer)
                      << "ns, buffer size is " << GST_BUFFER_SIZE (buffer)
                      << " while max size is " << max_size << "\n";
            // If max_size is smaller than the buffer, only grab the last 10 ms captured.
            // The occasional size difference is presumably because the buffers carry
            // more audio frames than the rtp stream wants.
            if (bufSize > 0)
                offset = GST_BUFFER_SIZE (buffer) - bufSize;  // assign, don't redeclare:
                                                              // a new `uint offset` here would
                                                              // shadow the outer one and leave it 0

            memcpy (data, buffer->data + offset, bufSize);
            written = bufSize;
            gst_buffer_unref (buffer);
        }
    }
}

Update: Ok, so I've narrowed the problem down to the pulseaudio plugin for gstreamer. The autoaudiosrc is using the pulsesrc plugin for capture, and for whatever reason the pulse server slows down after a few resamplings. I tested with alsasrc, and it seems to handle the sample-rate changes while keeping the 10 ms buffers, but the problem is that it will not let me capture the audio in mono: only in stereo.

Recommended answer

I got rid of the autoaudiosrc and plugged in alsasrc instead. The pulsesrc plugin was causing the erratic blocking behavior on the buffer pull, which was giving me varying buffer lengths. The only remaining problem was that alsasrc wouldn't capture in mono. I remedied that by adding an audioconvert element to the pipeline. My final pipeline was:

alsasrc ! audioconvert ! audio/x-raw-int, signed=true, endianness=1234, depth=16, width=16, channels=1, rate=16000 ! appsink name=soundSink max_buffers=2 drop=true

This gave me the buffer lengths I needed. However, is this going to cause any significant performance issues, given that it will run on an embedded device?
