Trying to understand buffers with regard to NAudio in C#


Problem description


I'm a chemistry student trying to use NAudio in C# to gather data from my computer's microphone (planning on switching to an audio port later, in case that's pertinent to how this gets answered). I understand what source streams are, and how NAudio uses an event handler to know whether or not to start reading information from said stream, but I get stumped when it comes to working with the data that has been read from the stream. As I understand it, a buffer array is populated in either byte or WAV format from the source stream (with the AddSamples command). For now, all that I'm trying to do is populate the buffer and write its contents on the console or make a simple visualization. I can't seem to get my values out of the buffer, and I've tried treating it as both a WAV and byte array. Can someone give me a hand in understanding how NAudio works from the ground up, and how to extract the data from the buffer and store it in a more useful format (i.e. doubles)? Here's the code I have so far for handling NAudio and all that comes with it:

public NAudio.Wave.BufferedWaveProvider waveBuffer = null; // clears buffer 

NAudio.Wave.WaveIn sourceStream = null; // clears source stream

public void startRecording(int samplingFrequency, int deviceNumber, string fileName)
{       
    sourceStream = new NAudio.Wave.WaveIn(); // initializes incoming audio stream
    sourceStream.DeviceNumber = deviceNumber; // specifies microphone device number 
    sourceStream.WaveFormat = new NAudio.Wave.WaveFormat(samplingFrequency, NAudio.Wave.WaveIn.GetCapabilities(deviceNumber).Channels); // specifies sampling frequency, channels

    waveBuffer = new NAudio.Wave.BufferedWaveProvider(sourceStream.WaveFormat); // initializes buffer

    sourceStream.DataAvailable += new EventHandler<NAudio.Wave.WaveInEventArgs>(sourceStream_DataAvailable); // event handler for when incoming audio is available

    sourceStream.StartRecording();

    PauseForMilliSeconds(500); // delay before recording is stopped          

    sourceStream.StopRecording(); // terminates recording
    sourceStream.Dispose();
    sourceStream = null;
}

void sourceStream_DataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
{
    waveBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded); // populate buffer with audio stream
    waveBuffer.DiscardOnBufferOverflow = true;
}

Solution

Disclaimer: I don't have that much experience with NAudio.


It kind of depends on what you want to do with the audio data.

If you simply want to store or dump the data (be it a file target or just the console) then you don't need a BufferedWaveProvider. Just do whatever you want to do directly in the event handler sourceStream_DataAvailable(). But keep in mind that you receive the data as raw bytes, i.e. how many bytes actually constitute a single frame (a.k.a. sample) of the recorded audio depends on the wave format:

var bytesPerFrame = sourceStream.WaveFormat.BitsPerSample / 8
                  * sourceStream.WaveFormat.Channels;
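
Since you mentioned wanting the data as doubles: if the wave format is 16-bit PCM (the usual default for WaveIn), one way to decode the raw bytes directly in the event handler might look like the following sketch. The normalization factor and the assumption of 16-bit little-endian samples are mine, not from your code, so check `sourceStream.WaveFormat.BitsPerSample` first:

```csharp
// Sketch: convert raw 16-bit PCM bytes to doubles inside the DataAvailable handler.
// Assumes sourceStream.WaveFormat.BitsPerSample == 16 (signed, little-endian).
void sourceStream_DataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
{
    int sampleCount = e.BytesRecorded / 2; // 2 bytes per 16-bit sample
    var doubles = new double[sampleCount];

    for (int i = 0; i < sampleCount; i++)
    {
        // Each sample is a signed 16-bit integer in little-endian byte order.
        short sample = BitConverter.ToInt16(e.Buffer, i * 2);
        doubles[i] = sample / 32768.0; // normalize to roughly [-1.0, 1.0)
    }

    foreach (var d in doubles)
        Console.WriteLine(d);
}
```

With multiple channels the samples are interleaved (left, right, left, right, ...), so for a stereo device every pair of consecutive doubles belongs to one frame.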

If you want to analyze the data (fourier analysis with FFT, for instance) then I suggest to use NAudio's ISampleProvider. This interface hides all the raw byte, bit-depth stuff and lets you access the data frame by frame in an easy manner.

First create an ISampleProvider from your BufferedWaveProvider like so:

var samples = waveBuffer.ToSampleProvider();

You can then access a sample frame with the Read() method. Make sure to check if data is actually available with the BufferedBytes property on your BufferedWaveProvider:

while (true)
{
    var bufferedFrames = waveBuffer.BufferedBytes / bytesPerFrame;

    if (bufferedFrames < 1)
        continue; // nothing buffered yet; a short Thread.Sleep() here would avoid busy-waiting

    var frames = new float[bufferedFrames];
    samples.Read(frames, 0, bufferedFrames);

    DoSomethingWith(frames);
}

Because you want to do two things at once -- recording and analyzing audio data concurrently -- you should use two separate threads for this.
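
As a sketch of that split (the `DoSomethingWith` analysis routine is hypothetical, and `waveBuffer`, `samples`, and `bytesPerFrame` are the fields set up earlier), the recording callback keeps filling the BufferedWaveProvider while a background task drains it:

```csharp
using System.Threading;
using System.Threading.Tasks;

// Sketch: recording stays on NAudio's callback thread; analysis runs on its own task.
var cancellation = new CancellationTokenSource();

Task analysisTask = Task.Run(() =>
{
    while (!cancellation.IsCancellationRequested)
    {
        var bufferedFrames = waveBuffer.BufferedBytes / bytesPerFrame;
        if (bufferedFrames < 1)
        {
            Thread.Sleep(10); // wait briefly instead of busy-waiting
            continue;
        }

        var frames = new float[bufferedFrames];
        samples.Read(frames, 0, bufferedFrames);
        DoSomethingWith(frames); // hypothetical analysis routine
    }
});

// Later, when recording stops:
cancellation.Cancel();
analysisTask.Wait();
```

BufferedWaveProvider is thread-safe for this producer/consumer pattern, which is exactly why it is useful here: AddSamples() on the callback thread and Read() on the analysis thread don't step on each other.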

There is a small GitHub project that uses NAudio for DTMF analysis of recorded audio data. You might want to have a look to get some ideas on how to bring it all together. The file DtmfDetector\Program.cs there is a good starting point.


For a quick start that should give you "more coherent" output try the following:

Add this field to your class:

ISampleProvider samples;

Add this line to your method startRecording():

samples = waveBuffer.ToSampleProvider();

Extend sourceStream_DataAvailable() like so:

void sourceStream_DataAvailable(object sender, NAudio.Wave.WaveInEventArgs e)
{
    waveBuffer.AddSamples(e.Buffer, 0, e.BytesRecorded);
    waveBuffer.DiscardOnBufferOverflow = true;

    var bytesPerFrame = sourceStream.WaveFormat.BitsPerSample / 8
                      * sourceStream.WaveFormat.Channels;
    var bufferedFrames = waveBuffer.BufferedBytes / bytesPerFrame;

    var frames = new float[bufferedFrames];
    samples.Read(frames, 0, bufferedFrames);

    foreach (var frame in frames)
        Debug.WriteLine(frame);
}
