Recording Mono on iPhone in IMA4 format


Problem description


I'm using the SpeakHere sample app on Apple's developer site to create an audio recording app. I'm attempting to record directly to IMA4 format using the kAudioFormatAppleIMA4 system constant. This is listed as one of the usable formats, but every time I set up my audio format variable and pass it in, I get a 'fmt?' error. Here is the code I use to set up the audio format variable:

#define kAudioRecordingFormat kAudioFormatAppleIMA4
#define kAudioRecordingType kAudioFileCAFType
#define kAudioRecordingSampleRate 16000.00
#define kAudioRecordingChannelsPerFrame 1
#define kAudioRecordingFramesPerPacket 1
#define kAudioRecordingBitsPerChannel 16
#define kAudioRecordingBytesPerPacket 2
#define kAudioRecordingBytesPerFrame 2

- (void) setupAudioFormat: (UInt32) formatID {

    // Obtains the hardware sample rate for use in the recording
    // audio format. Each time the audio route changes, the sample rate
    // needs to get updated.
    UInt32 propertySize = sizeof (self.hardwareSampleRate);

    OSStatus err = AudioSessionGetProperty (
        kAudioSessionProperty_CurrentHardwareSampleRate,
        &propertySize,
        &hardwareSampleRate
    );

    if(err != 0){
        NSLog(@"AudioRecorder::setupAudioFormat - error getting audio session property");
    }

    audioFormat.mSampleRate = kAudioRecordingSampleRate;

    NSLog (@"Hardware sample rate = %f", self.audioFormat.mSampleRate);

    audioFormat.mFormatID           = formatID;
    audioFormat.mChannelsPerFrame   = kAudioRecordingChannelsPerFrame;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = kAudioRecordingFramesPerPacket;
    audioFormat.mBitsPerChannel     = kAudioRecordingBitsPerChannel;
    audioFormat.mBytesPerPacket     = kAudioRecordingBytesPerPacket;
    audioFormat.mBytesPerFrame      = kAudioRecordingBytesPerFrame;

}

And here is where I use that function:

- (id) initWithURL: fileURL {
    NSLog (@"initializing a recorder object.");
    self = [super init];

    if (self != nil) {

        // Specify the recording format. Options are:
        //
        //      kAudioFormatLinearPCM
        //      kAudioFormatAppleLossless
        //      kAudioFormatAppleIMA4
        //      kAudioFormatiLBC
        //      kAudioFormatULaw
        //      kAudioFormatALaw
        //
        // When targeting the Simulator, SpeakHere uses linear PCM regardless of the format
        //  specified here. See the setupAudioFormat: method in this file.
        [self setupAudioFormat: kAudioRecordingFormat];

        OSStatus result =   AudioQueueNewInput (
                                &audioFormat,
                                recordingCallback,
                                self,                   // userData
                                NULL,                   // run loop
                                NULL,                   // run loop mode
                                0,                      // flags
                                &queueObject
                            );

        NSLog (@"Attempted to create new recording audio queue object. Result: %f", result);

        // get the recording format back from the audio queue's audio converter --
        //  the file may require a more specific stream description than was 
        //  necessary to create the encoder.
        UInt32 sizeOfRecordingFormatASBDStruct = sizeof (audioFormat);

        AudioQueueGetProperty (
            queueObject,
            kAudioQueueProperty_StreamDescription,  // this constant is only available in iPhone OS
            &audioFormat,
            &sizeOfRecordingFormatASBDStruct
        );

        AudioQueueAddPropertyListener (
            [self queueObject],
            kAudioQueueProperty_IsRunning,
            audioQueuePropertyListenerCallback,
            self
        );

        [self setAudioFileURL: (CFURLRef) fileURL];

        [self enableLevelMetering];
    }
    return self;
} 

Thanks for the help! -Matt

Solution

I'm not sure that all the format flags you're passing are correct; IMA4 (which, IIRC, stands for IMA ADPCM 4:1) is 4-bit (4:1 compression from 16 bits) with some headers.

According to the docs for AudioStreamBasicDescription:

  • mBytesPerFrame should be 0, since the format is compressed.
  • mBitsPerChannel should be 0, since the format is compressed.
  • mFormatFlags should probably be 0, since there is nothing to choose.

According to afconvert -f caff -t ima4 -c 1 blah.aiff blah.caf followed by afinfo blah.caf:

  • mBytesPerPacket should be 34, and
  • mFramesPerPacket should be 64. You might be able to set these to 0 instead.
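Putting the two lists together, here is a minimal sketch of how the ASBD could be filled in for mono IMA4 recording. The 16 kHz sample rate is carried over from the question's defines, and the helper name MakeMonoIMA4Format is just for illustration; the 64-frame / 34-byte packet layout follows from IMA4 packing 64 samples as 4-bit nibbles (32 bytes) plus a 2-byte state header per channel.

#import <AudioToolbox/AudioToolbox.h>

// Sketch of an AudioStreamBasicDescription for mono IMA4 recording,
// using the values suggested above. mBitsPerChannel, mBytesPerFrame and
// mFormatFlags stay 0 because the format is compressed and has no flags to choose.
static AudioStreamBasicDescription MakeMonoIMA4Format (Float64 sampleRate) {
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = sampleRate;              // e.g. 16000.0, as in the question
    format.mFormatID         = kAudioFormatAppleIMA4;
    format.mChannelsPerFrame = 1;                       // mono
    format.mFramesPerPacket  = 64;                      // one IMA4 packet = 64 samples
    format.mBytesPerPacket   = 34;                      // 2-byte header + 64 * 4 bits
    format.mBitsPerChannel   = 0;                       // compressed format
    format.mBytesPerFrame    = 0;                       // compressed format
    format.mFormatFlags      = 0;                       // nothing to choose for IMA4
    return format;
}

Passing a description like this to AudioQueueNewInput, instead of the PCM-style bit/byte settings from the question, should avoid the 'fmt?' error; the rest of the question's code (reading back kAudioQueueProperty_StreamDescription, writing to a kAudioFileCAFType file) can stay as it is.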

The reference algorithm in the original IMA spec is not that helpful (it's an OCR of scans; the site also has the scans).
