Writing bytes to audio file using AUHAL audio unit

Problem Description

I am trying to create a WAV file from the sound input I get from the default input device of my MacBook (the built-in mic). However, the resultant file, when imported into Audacity as raw data, is complete garbage.

First I initialize the audio file reference so I can later write to it in the audio unit's input callback.

   // struct contains audiofileID as member
   MyAUGraphPlayer player = {0};
   player.startingByte = 0;

   // describe a PCM format for audio file
   AudioStreamBasicDescription format =  { 0 };
   format.mBytesPerFrame = 2;
   format.mBytesPerPacket = 2;
   format.mChannelsPerFrame = 1;
   format.mBitsPerChannel = 16;
   format.mFramesPerPacket = 1;
   format.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
   format.mFormatID = kAudioFormatLinearPCM;

   CFURLRef myFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFSTR("./test.wav"), kCFURLPOSIXPathStyle, false);
   //CFShow (myFileURL);
   CheckError(AudioFileCreateWithURL(myFileURL,
                                     kAudioFileWAVEType,
                                     &format,
                                     kAudioFileFlags_EraseFile,
                                     &player.recordFile), "AudioFileCreateWithURL failed");
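
The code above (and everything that follows) relies on a CheckError helper that the post never shows. A common definition is the one from the book Learning Core Audio, whose style this code appears to follow; it is reproduced here as an assumption, not as part of the original post:

   static void CheckError(OSStatus error, const char *operation)
   {
      if (error == noErr) return;

      char errorString[20];
      // See if the error code is a printable four-character code
      *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
      if (isprint(errorString[1]) && isprint(errorString[2]) &&
          isprint(errorString[3]) && isprint(errorString[4])) {
         errorString[0] = errorString[5] = '\'';
         errorString[6] = '\0';
      } else {
         // Otherwise, print the raw integer value
         sprintf(errorString, "%d", (int)error);
      }
      fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
      exit(1);
   }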

Here I malloc some buffers to hold the audio data coming in from the AUHAL unit.

   UInt32 bufferSizeFrames = 0;
   propertySize = sizeof(UInt32);
   CheckError (AudioUnitGetProperty(player->inputUnit,
                                    kAudioDevicePropertyBufferFrameSize,
                                    kAudioUnitScope_Global,
                                    0,
                                    &bufferSizeFrames,
                                    &propertySize), "Couldn't get buffer frame size from input unit");
   UInt32 bufferSizeBytes = bufferSizeFrames * sizeof(Float32);

   printf("buffer num of frames  %i", bufferSizeFrames);

   if (player->streamFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved) {

      int offset = offsetof(AudioBufferList, mBuffers[0]);
      int sizeOfAB = sizeof(AudioBuffer);
      int chNum = player->streamFormat.mChannelsPerFrame;

      int inputBufferSize = offset + sizeOfAB * chNum;

      //malloc buffer lists
      player->inputBuffer = (AudioBufferList *)malloc(inputBufferSize);
      player->inputBuffer->mNumberBuffers = chNum;

      for (UInt32 i = 0; i < chNum ; i++) {
         player->inputBuffer->mBuffers[i].mNumberChannels = 1;
         player->inputBuffer->mBuffers[i].mDataByteSize = bufferSizeBytes;
         player->inputBuffer->mBuffers[i].mData = malloc(bufferSizeBytes);
      }
   }
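
Note that only the non-interleaved case is handled. For reference, the interleaved case would need just a single AudioBuffer carrying all channels; a hedged sketch of the missing branch (an assumption, not shown in the original post):

   else {
      // Interleaved: one buffer holds the samples of every channel
      player->inputBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList));
      player->inputBuffer->mNumberBuffers = 1;
      player->inputBuffer->mBuffers[0].mNumberChannels = player->streamFormat.mChannelsPerFrame;
      player->inputBuffer->mBuffers[0].mDataByteSize = bufferSizeBytes * player->streamFormat.mChannelsPerFrame;
      player->inputBuffer->mBuffers[0].mData = malloc(player->inputBuffer->mBuffers[0].mDataByteSize);
   }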

To check that the data is actually sensible, I render the audio unit and then log the first 4 bytes of each set of frames (4096) in each callback. The reason was to check that the values were in keeping with what was going into the mic. As I talked into the mic, I noticed the logged values at this location in memory corresponded to the input. So it seems that things are working in that regard:

   // render into our buffer
   OSStatus inputProcErr = noErr;
   inputProcErr = AudioUnitRender(player->inputUnit,
                                  ioActionFlags,
                                  inTimeStamp,
                                  inBusNumber,
                                  inNumberFrames,
                                  player->inputBuffer);
   // copy from our buffer to ring buffer

   Float32 someDataL = *(Float32 *)(player->inputBuffer->mBuffers[0].mData);
   printf("L2 input: % 1.7f \n", someDataL);

And finally, in the input callback I write the audio bytes to the file.

   UInt32 numOfBytes = 4096 * player->streamFormat.mBytesPerFrame;

   AudioFileWriteBytes(player->recordFile,
                       FALSE,
                       player->startingByte,
                       &numOfBytes,
                       &ioData[0].mBuffers[0].mData);

   player->startingByte += numOfBytes;

So I have not figured out why the data comes out sounding glitchy, distorted, or not there at all. One thing to note is that the resultant audio file is about as long as I actually recorded for. (Hitting return stops the audio units and closes the audio file.)
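
For reference, the stop-and-close step mentioned above presumably looks something like this (a sketch under assumptions; the post does not show it):

   // Block until the user hits return, then tear everything down
   getchar();
   CheckError(AudioOutputUnitStop(player.inputUnit), "AudioOutputUnitStop failed");
   CheckError(AudioFileClose(player.recordFile), "AudioFileClose failed");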

I'm not sure what to look at next. Has anyone attempted writing to an audio file from the AUHAL callback and had similar results?

Answer

You are setting the (32-bit) floating point flag in your format request:

format.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat

yet WAVE files usually contain 16-bit integer samples. Writing 32-bit float samples into a 16-bit integer audio file will usually produce garbage.
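
A minimal sketch of an ASBD that is consistent with the 16-bit field sizes the question already declares: the signed-integer flag replaces the float flag, and mSampleRate (omitted in the question) must be set; 44100.0 here is an assumption and must match what the input side actually delivers. Note that the samples handed to AudioFileWriteBytes would then also have to be 16-bit integers, converted from the Float32 data the AUHAL produces:

   AudioStreamBasicDescription format = { 0 };
   format.mSampleRate       = 44100.0;   // assumption: must match the input rate
   format.mFormatID         = kAudioFormatLinearPCM;
   format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
   format.mBitsPerChannel   = 16;
   format.mChannelsPerFrame = 1;
   format.mFramesPerPacket  = 1;
   format.mBytesPerFrame    = 2;         // 16-bit mono: 2 bytes per frame
   format.mBytesPerPacket   = 2;

Alternatively, WAV can also hold 32-bit float data, in which case the original description could be kept as-is apart from adding mSampleRate.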
