Play audio on iOS from a memory data stream


Question

I am porting an audio library to iOS that plays audio streams fed from callbacks. The user provides a callback returning raw PCM data, and I need to have this data played. Moreover, the library must be able to play multiple streams at once.

I figured I would need to use AVFoundation, but it seems that AVAudioPlayer does not support streamed audio buffers, and all the streaming documentation I could find used data coming directly from the network. Which API should I use here?

Thanks in advance!

By the way, I am not using the Apple libraries through Swift or Objective-C. However, I assume everything is still exposed, so an example in Swift would be greatly appreciated anyway!

Answer

You need to initialize:

  1. The Audio Session, to use the input and output audio units.

-(SInt32) audioSessionInitialization:(SInt32)preferred_sample_rate {

    // - - - - - - Audio Session initialization
    NSError *audioSessionError = nil;
    session = [AVAudioSession sharedInstance];

    // disable AVAudioSession
    [session setActive:NO error:&audioSessionError];

    // set category - (PlayAndRecord to use input and output session AudioUnits)
    [session setCategory:AVAudioSessionCategoryPlayAndRecord
             withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
                   error:&audioSessionError];

    // request the caller's preferred sample rate (e.g. 44100 Hz)
    [session setPreferredSampleRate:preferred_sample_rate error:&audioSessionError];

    // enable AVAudioSession
    [session setActive:YES error:&audioSessionError];

    // Configure notification for device output change (speakers/headphones)
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(routeChange:)
                                                 name:AVAudioSessionRouteChangeNotification
                                               object:nil];

    // - - - - - - Create audio engine
    [self audioEngineInitialization];

    return [session sampleRate];
}
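
The registration above points at a routeChange: selector whose body the answer does not show. A minimal sketch of such a handler, assuming you only want to react when the current output device disappears (e.g. headphones are unplugged):

-(void) routeChange:(NSNotification *)notification {
    // the reason for the route change is delivered in the notification's userInfo
    AVAudioSessionRouteChangeReason reason =
        [notification.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];

    // e.g. headphones unplugged: the previous output device became unavailable
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable) {
        NSLog(@"Audio route changed: output device no longer available");
    }
}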

  2. The Audio Engine

-(void) audioEngineInitialization{

    engine = [[AVAudioEngine alloc] init];
    inputNode = [engine inputNode];
    outputNode = [engine outputNode];

    [engine connect:inputNode to:outputNode format:[inputNode inputFormatForBus:0]];

    // interleaved 16-bit signed integer stereo PCM
    AudioStreamBasicDescription asbd_player;
    asbd_player.mSampleRate       = session.sampleRate;
    asbd_player.mFormatID         = kAudioFormatLinearPCM;
    asbd_player.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd_player.mFramesPerPacket  = 1;
    asbd_player.mChannelsPerFrame = 2;
    asbd_player.mBitsPerChannel   = 16;
    asbd_player.mBytesPerPacket   = 4;
    asbd_player.mBytesPerFrame    = 4;

    OSStatus status;
    status = AudioUnitSetProperty(inputNode.audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &asbd_player,
                                  sizeof(asbd_player));

    // Add the render callback for the ioUnit: for playing
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = engineInputCallback; ///CALLBACK///
    callbackStruct.inputProcRefCon = (__bridge void *)(self);
    status = AudioUnitSetProperty(inputNode.audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Input,
                                  kOutputBus, // output element, typically #defined as 0
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    [engine prepare];
}
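
Note that prepare alone does not produce sound; the engine must also be started before the render callback fires. The answer does not show this step; a minimal sketch using the engine instance from above:

NSError *engineError = nil;
if (![engine startAndReturnError:&engineError]) {
    // the render callback will never run if the engine fails to start
    NSLog(@"Failed to start AVAudioEngine: %@", engineError);
}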
    

  3. The Audio Engine callback

static OSStatus engineInputCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
{
    // the reference to the audio controller where you get the stream data
    MyAudioController *ac = (__bridge MyAudioController *)(inRefCon);

    // in practice there is only ever 1 buffer, since the PCM data is interleaved
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // copy stream buffer data to the output buffer, never more than it can hold
        UInt32 size = MIN(buffer.mDataByteSize, ac.streamBuffer.mDataByteSize);
        memcpy(buffer.mData, ac.streamBuffer.mData, size);
        buffer.mDataByteSize = size; // indicate how much data we wrote in the buffer
    }

    return noErr;
}
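
The callback above plays back a single source, but the question also asks for multiple simultaneous streams. One common approach is to mix every source into the output by summing samples with clamping. A minimal sketch of a replacement for the memcpy step, assuming a hypothetical ac.streams array of source objects exposing buffers in the same 16-bit interleaved format:

// Hypothetical mixing variant: sum 16-bit samples from every active
// stream into the output buffer, clamping to avoid integer overflow.
SInt16 *out = (SInt16 *)buffer.mData;
UInt32 sampleCount = buffer.mDataByteSize / sizeof(SInt16);
memset(out, 0, buffer.mDataByteSize); // start from silence

for (MyAudioStream *stream in ac.streams) { // ac.streams is an assumption
    SInt16 *src = (SInt16 *)stream.buffer.mData;
    UInt32 n = MIN(sampleCount, stream.buffer.mDataByteSize / sizeof(SInt16));
    for (UInt32 s = 0; s < n; s++) {
        SInt32 mixed = (SInt32)out[s] + (SInt32)src[s];
        out[s] = (SInt16)MAX(-32768, MIN(32767, mixed)); // clamp to 16-bit range
    }
}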
    

