How to write audio file locally recorded from microphone using AudioBuffer in iPhone?

Question

I am new to the audio frameworks. Can anyone help me write the audio that is captured from the microphone to a file while it is playing?

Below is the code to play mic input through the iPhone speaker; now I would like to save the audio on the iPhone for future use.

I found the code here to record audio using the microphone: http://www.stefanpopp.de/2011/capture-iphone-microphone/

/**
 The code for playing back the captured audio starts here.
 */

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    /**
     This is the reference to the object who owns the callback.
     */
    AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

    // iterate over incoming stream and copy to output stream
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size;

        // get a copy of the recorder struct variable
        Recorder recInfo = audioProcessor.audioRecorder;
        // write the bytes
        OSStatus audioErr = noErr;
        if (recInfo.running) {
            audioErr = AudioFileWriteBytes(recInfo.recordFile,
                                           false,
                                           recInfo.inStartingByte,
                                           &size,
                                           buffer.mData); // pass the data pointer itself, not its address
            assert(audioErr == noErr);
            // increment our byte count
            recInfo.inStartingByte += (SInt64)size; // size is the number of bytes written
            audioProcessor.audioRecorder = recInfo;
        }
    }

    return noErr;
}

-(void)prepareAudioFileToRecord{

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;

    NSTimeInterval time = [[NSDate date] timeIntervalSince1970]; // returned as a double
    long digits = (long)time; // this is the first 10 digits
    int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
//    long timestamp = (digits * 1000) + decimalDigits;
    NSString *timeStampValue = [NSString stringWithFormat:@"%ld", digits];
//    NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d", digits, decimalDigits];

    NSString *fileName = [NSString stringWithFormat:@"test%@.caf", timeStampValue];
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // modify the ASBD (see EDIT: towards the end of this post!)
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileCAFType,
                                      &audioFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
    self.audioRecorder = audioRecorder;
}

Thanks in advance,
bala

Answer

To write the bytes from an AudioBuffer to a file locally, we need help from Audio File Services (https://developer.apple.com/library/ios/documentation/MusicAudio/Reference/AudioFileConvertRef/Reference/reference.html), which is included in the AudioToolbox framework.
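In practical terms that means linking AudioToolbox.framework into the target and importing its umbrella header wherever the file calls are made, for example at the top of AudioProcessor.h:

// brings in AudioFileID, AudioFileWriteBytes, AudioFileCreateWithURL, etc.
#import <AudioToolbox/AudioToolbox.h>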

Conceptually we will do the following: set up an audio file and maintain a reference to it (we need this reference to be accessible from the render callback that you included in your post). We also need to keep track of the number of bytes written each time the callback is called. Finally, we need a flag that will let us know to stop writing to the file and close it.

Because the code in the link you provided declares an AudioStreamBasicDescription which is LPCM, and hence constant bit rate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and would use the AudioFileWritePackets function instead).
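For reference, an LPCM description of that sort might be filled in as below. This is only a sketch: the 44.1 kHz mono 16-bit values are assumptions of mine, so match them to whatever the tutorial's code actually sets.

// a minimal LPCM AudioStreamBasicDescription sketch; the sample rate,
// channel count, and bit depth here are assumptions, not the tutorial's values
AudioStreamBasicDescription recordFormat = {0};
recordFormat.mSampleRate       = 44100.0;
recordFormat.mFormatID         = kAudioFormatLinearPCM;
recordFormat.mFormatFlags      = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
recordFormat.mChannelsPerFrame = 1;
recordFormat.mBitsPerChannel   = 16;
recordFormat.mBytesPerFrame    = recordFormat.mChannelsPerFrame * recordFormat.mBitsPerChannel / 8;
recordFormat.mFramesPerPacket  = 1; // uncompressed audio: one frame per packet
recordFormat.mBytesPerPacket   = recordFormat.mBytesPerFrame; // constant bit rate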

Let's start by declaring a custom struct (which contains all the extra data we'll need), adding an instance variable of this custom struct, and making a property that points to the struct variable. We'll add this to the AudioProcessor custom class, since you already have access to this object from within the callback, where you typecast it in this line:

AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

Add this to AudioProcessor.h (above the @interface)

typedef struct Recorder {
    AudioFileID recordFile;
    SInt64 inStartingByte;
    Boolean running;
} Recorder;

Now let's add an instance variable, make a pointer property for it, and assign the property to the instance variable (so we can access it from within the callback function). In the @interface, add an instance variable named audioRecorder and also make the ASBD available to the class.

Recorder audioRecorder;
AudioStreamBasicDescription recordFormat;// assign this ivar to where the asbd is created in the class

In the method -(void)initializeAudio comment out or delete this line as we have made recordFormat an ivar.

//AudioStreamBasicDescription recordFormat;

Now add the kAudioFormatFlagIsBigEndian format flag to where the ASBD is set up.

// also modify the ASBD in the AudioProcessor classes -(void)initializeAudio method (see EDIT: towards the end of this post!)
    recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

Finally, add a property that is a pointer to the audioRecorder instance variable, and don't forget to synthesise it in AudioProcessor.m. We will name the pointer property audioRecorderPointer.

@property Recorder *audioRecorderPointer;

// in .m synthesise the property
@synthesize audioRecorderPointer;

Now let's assign the pointer to the ivar (this could be placed in the -(void)initializeAudio method of the AudioProcessor class)

// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;

Now, in AudioProcessor.m, let's add a method to set up the file and open it so we can write to it. This should be called before you start the AUGraph running; a call-order sketch follows the method below.

-(void)prepareAudioFileToRecord {
    // let's set up a test file in the documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    NSString *fileName = @"test_recording.aif";
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileAIFFType,
                                      &recordFormat, // pass the ASBD by address
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
}
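Putting the pieces in order, the call site might look like this. The start method name here is hypothetical; use whatever method actually starts the AUGraph in your AudioProcessor class.

AudioProcessor *processor = [[AudioProcessor alloc] init];
[processor prepareAudioFileToRecord]; // open the file before any render callbacks fire
[processor start];                    // hypothetical name for the method that starts the AUGraph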

Okay, we are nearly there. Now we have a file to write to, and an AudioFileID that can be accessed from the render callback. So inside the callback function you posted, add the following right before you return noErr at the end of the method.

// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;
// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
    audioErr = AudioFileWriteBytes(recInfo->recordFile,
                                   false,
                                   recInfo->inStartingByte,
                                   &size,
                                   buffer.mData);
    assert(audioErr == noErr);
    // increment our byte count
    recInfo->inStartingByte += (SInt64)size; // size is the number of bytes written
}

When we want to stop recording (probably invoked by some user action), simply make the running boolean false and close the file like this somewhere in the AudioProcessor class.

audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);
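If you like, those lines can be wrapped in a small method on AudioProcessor; a sketch, with a method name of my own choosing:

// stop writing and close the file; the name stopRecording is my own
-(void)stopRecording {
    audioRecorder.running = false; // the render callback will skip further writes
    OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
    assert(audioErr == noErr);
}

Note that the render callback runs on a separate audio thread, so in production code you would want to make sure no write is in flight before the file is closed.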

EDIT: The endianness of the samples needs to be big-endian for the file, so add the kAudioFormatFlagIsBigEndian bit mask flag to the ASBD in the source code found at the link provided in the question.

For extra info about this topic the Apple documents are a great resource and I also recommend reading 'Learning Core Audio' by Chris Adamson and Kevin Avila (of which I own a copy).
