How to change Buffer - Double Buffering


Problem Description

Hi guys,

I have two queries:
(i) How do I use double buffering for voice recording with the Wave API? Please suggest the steps for doing double buffering.

(ii) I want one buffer to record voice; once it is filled, it should go for playing while my second buffer takes over the recording. Once the second buffer is filled, it should go for playing and the first buffer should go back to recording, and so on.

Please tell me the steps so that I can do the above. Thanks in advance for your valuable comments and answers. Looking forward to your response.

Recommended Answer

You do not mention it here, but I know from your previous questions that you actually want to play the audio in a client application, which receives the audio through a socket connection. I am only going to address the server side here.

It seems to me that you have focused too much on the API documentation's notes about using double buffering. Yes, you should do this to avoid time gaps with missing audio resulting in "pops" and audio dropouts, but do not let this dictate the architecture of the rest of your application.

To separate the functionality, I would suggest you set up classes for the following 3 areas in your server (your server will obviously have more classes and functionality, but these are the ones I will be referring to; a minimal sketch follows the list):
- Audio Source handler, responsible for initializing the WaveIn device and reacting to new audio data being captured.
- File handler, responsible for creating audio files, adding new audio to the files, closing files, etc.
- Network handler, responsible for accepting incoming client connections and sending audio packets to the connected client(s).
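
A minimal C++ sketch of that separation. All class and method names here are hypothetical, chosen only for illustration:

#include <cstdint>
#include <vector>

// One block of captured audio, copied out of a capture buffer.
struct AudioBlock {
    std::vector<uint8_t> data;
};

class FileHandler {
public:
    void OnAudioBlock(AudioBlock block);   // append the block to the current file
};

class NetworkHandler {
public:
    void OnAudioBlock(AudioBlock block);   // send the block to the connected client(s)
};

class AudioSourceHandler {
public:
    AudioSourceHandler(FileHandler& file, NetworkHandler& net);
    bool Start();    // open the WaveIn device and queue the two capture buffers
    void Stop();
private:
    FileHandler&    file_;
    NetworkHandler& net_;
};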

When you create the two buffers for capturing the audio from the WaveIn device, you have to decide how large these buffers are going to be. It is usually fine if they are large enough to contain 0.5-1 second of audio, or even less.
During initialization, you prepare each buffer with waveInPrepareHeader() and then queue it with waveInAddBuffer() (a header must be prepared before its buffer is added).
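
As a concrete illustration, here is a minimal initialization sketch. The format (16 kHz, 16-bit mono) and the 0.5-second buffer size are assumptions for the example, and error handling is reduced to early returns:

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

static const int kBufferCount = 2;
static char    g_buffers[kBufferCount][16000];   // 0.5 s of audio at 32000 bytes/s
static WAVEHDR g_headers[kBufferCount];
static HWAVEIN g_waveIn;

bool StartCapture(DWORD captureThreadId)
{
    WAVEFORMATEX fmt = {0};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = 16000;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = (WORD)(fmt.nChannels * fmt.wBitsPerSample / 8);
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    // Deliver MM_WIM_DATA messages to a worker thread we control.
    if (waveInOpen(&g_waveIn, WAVE_MAPPER, &fmt,
                   captureThreadId, 0, CALLBACK_THREAD) != MMSYSERR_NOERROR)
        return false;

    for (int i = 0; i < kBufferCount; ++i) {
        ZeroMemory(&g_headers[i], sizeof(WAVEHDR));
        g_headers[i].lpData         = g_buffers[i];
        g_headers[i].dwBufferLength = sizeof(g_buffers[i]);
        waveInPrepareHeader(g_waveIn, &g_headers[i], sizeof(WAVEHDR));
        waveInAddBuffer(g_waveIn, &g_headers[i], sizeof(WAVEHDR));  // queue both up front
    }
    return waveInStart(g_waveIn) == MMSYSERR_NOERROR;
}
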
Now, every time you get the signal that the current buffer is full, you should perform the following steps (sketched in code after the list):
- Pass a copy of the buffer that has just been filled to the File handler.
- Pass another copy of the buffer to the Network handler.
- Call waveInAddBuffer() to return the copied buffer to the capture queue. The driver has already switched to the other buffer on its own, so copying before requeuing means it never writes into memory you are still reading.
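
A sketch of that handling loop, continuing the assumptions above (CALLBACK_THREAD delivery, and the hypothetical FileHandler/NetworkHandler from the earlier sketch):

void CaptureThreadLoop(FileHandler& file, NetworkHandler& net)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        if (msg.message != MM_WIM_DATA)
            continue;
        WAVEHDR* hdr = (WAVEHDR*)msg.lParam;    // the buffer that just filled

        // Copy out only the bytes actually recorded...
        AudioBlock block;
        block.data.assign(hdr->lpData, hdr->lpData + hdr->dwBytesRecorded);

        // ...hand one copy to each handler (ideally without blocking)...
        file.OnAudioBlock(block);               // first copy
        net.OnAudioBlock(std::move(block));     // second copy

        // ...then return the buffer to the capture queue. While we copied,
        // recording continued in the other buffer; that is the double buffering.
        waveInAddBuffer(g_waveIn, hdr, sizeof(WAVEHDR));
    }
}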

I know this might seem like a lot of copying, but this way the buffers used for capturing audio are always available to be swapped, file handling is not compromised by network issues, and vice versa.
You need a mechanism for passing a block of audio data to the File and Network handlers so the Audio Source handler can hand off the data and not worry about what happens to it. There are several ways of doing this, and I will not go into detail here, except to recommend that you use a non-blocking method (one possibility is sketched below).
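
For example, one simple non-blocking handoff (an illustrative sketch, not part of the Wave API) is a bounded queue that drops a block rather than stall the capture thread when a consumer falls behind:

#include <cstddef>
#include <deque>
#include <mutex>

class BlockQueue {
public:
    explicit BlockQueue(std::size_t maxBlocks) : max_(maxBlocks) {}

    // Called by the Audio Source handler; never waits on the consumer.
    bool TryPush(AudioBlock block) {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.size() >= max_)
            return false;               // drop instead of blocking capture
        q_.push_back(std::move(block));
        return true;
    }

    // Called from the File or Network handler's own thread.
    bool TryPop(AudioBlock& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty())
            return false;
        out = std::move(q_.front());
        q_.pop_front();
        return true;
    }

private:
    std::mutex             m_;
    std::deque<AudioBlock> q_;
    std::size_t            max_;
};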


I hope this helps.

Soren Madsen

