Playing Audio on iOS from Socket connection


Question

I hope you can help me with this issue. I have seen a lot of questions related to this, but none of them really helped me figure out what I am doing wrong here.

So on Android I have an AudioRecord which records audio and sends it as a byte array over a socket connection to clients. This part was super easy on Android and works perfectly.

When I started working with iOS I found out there is no easy way to go about this, so after two days of research, plugging, and playing, this is what I have got. It still does not play any audio. It makes a noise when it starts, but none of the audio being transferred over the socket is played. I confirmed that the socket is receiving data by logging each element in the buffer array.

Here is all the code I am using; a lot of it is reused from a bunch of sites, and I can't remember all the links. (BTW, this uses AudioUnits.)
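Note: the snippets below reference a few symbols that the question does not show (SAMPLE_RATE, kOutputBus, kInputBus, and a min() macro). Presumably they are declared roughly along these lines:

#import <AudioToolbox/AudioToolbox.h>

// Assumed declarations -- not shown in the question.
#define SAMPLE_RATE 44100.0   // "44khz" per the format comment below
#define kOutputBus  0         // RemoteIO element 0 = output (speaker)
#define kInputBus   1         // RemoteIO element 1 = input (microphone)
#define min(a, b)   ((a) < (b) ? (a) : (b))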

First up, the audio processor. The playback callback:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    /**
     This is a reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

    // iterate over the incoming stream and copy to the output stream
    for (int i=0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size;
    }
    return noErr;
}

Audio processor initialization

-(void)initializeAudio
{
    OSStatus status;

    // We define the audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output; // we want output
    desc.componentSubType = kAudioUnitSubType_RemoteIO; // we want input and output
    desc.componentFlags = 0; // must be zero
    desc.componentFlagsMask = 0; // must be zero
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider

    // find the AU component by description
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // create audio unit by component
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    [self hasError:status:__FILE__:__LINE__];

    // flag used to enable IO on a bus (1 = enabled)
    UInt32 flag = 1;

    // enable playback IO on the output bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO, // use io
                                  kAudioUnitScope_Output, // scope to output
                                  kOutputBus, // select output bus (0)
                                  &flag, // set flag
                                  sizeof(flag));
    [self hasError:status:__FILE__:__LINE__];

    /*
     We need to specify the format we want to work with.
     We use Linear PCM because it is uncompressed and we work on raw data.

     We want 16 bit, 2 bytes per packet/frame, at 44 kHz, mono.
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = SAMPLE_RATE;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;

    // set the format on the output scope of the input bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));

    [self hasError:status:__FILE__:__LINE__];



    /**
     We need to define a callback structure which holds
     a pointer to the playbackCallback and a reference to
     the audio processor object
     */
    AURenderCallbackStruct callbackStruct;

    /*
     Point the callback struct at our playback callback and pass
     this object through as the user-data pointer.
     */
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);

    // set playbackCallback as callback on our renderer for the output bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    [self hasError:status:__FILE__:__LINE__];

    // reset flag to 0
    flag = 0;

    /*
     Tell the audio unit NOT to allocate its own render buffer
     (flag is 0 here) so that we can write into our own.
     */
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));

    /*
     we set the number of channels to mono and allocate our block size to
     1024 bytes.
     */
    audioBuffer.mNumberChannels = 1;
    audioBuffer.mDataByteSize = 512 * 2;
    audioBuffer.mData = malloc( 512 * 2 );

    // Initialize the Audio Unit and cross fingers =)
    status = AudioUnitInitialize(audioUnit);
    [self hasError:status:__FILE__:__LINE__];

    NSLog(@"Started");

}
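One thing worth noting, as a hedged observation rather than part of the question: the code above never activates an audio session. Playback through a RemoteIO unit on iOS normally happens alongside an active AVAudioSession, set up roughly like this:

#import <AVFoundation/AVFoundation.h>

// Minimal sketch: activate a playback audio session before starting the unit.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
if (![session setCategory:AVAudioSessionCategoryPlayback error:&error] ||
    ![session setActive:YES error:&error]) {
    NSLog(@"Audio session setup failed: %@", error);
}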

Start playback

-(void)start;
{
    // start the audio unit. You should hear something, hopefully :)
    OSStatus status = AudioOutputUnitStart(audioUnit);
    [self hasError:status:__FILE__:__LINE__];
}

Adding data to the buffer

-(void)processBuffer: (AudioBufferList*) audioBufferList
{
    AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];

    // we check here if the input data byte size has changed
    if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        // clear old buffer
        free(audioBuffer.mData);
        // assign the new byte size and allocate mData
        audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }
    // copy the incoming audio data into the audio buffer
    memcpy(audioBuffer.mData, audioBufferList->mBuffers[0].mData, audioBufferList->mBuffers[0].mDataByteSize);
}
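Note that processBuffer overwrites the single shared audioBuffer every time data arrives, while the render callback reads it on its own schedule, so samples can be dropped or replayed. The usual remedy is a single-producer/single-consumer ring buffer between the socket thread and the render callback. A minimal sketch, ignoring memory-ordering subtleties (this is not necessarily the fix from the accepted answer):

// A byte ring sized at a power of two so the modulo stays cheap.
typedef struct {
    uint8_t data[16384];
    volatile uint32_t head;  // written by the socket thread
    volatile uint32_t tail;  // read by the render callback
} ByteRing;

static void ring_write(ByteRing *r, const uint8_t *src, uint32_t len) {
    for (uint32_t i = 0; i < len; i++) {
        if (r->head - r->tail >= sizeof(r->data)) break; // full: drop the rest
        r->data[r->head % sizeof(r->data)] = src[i];
        r->head++;
    }
}

static uint32_t ring_read(ByteRing *r, uint8_t *dst, uint32_t maxLen) {
    uint32_t n = 0;
    while (n < maxLen && r->tail != r->head) {
        dst[n++] = r->data[r->tail % sizeof(r->data)];
        r->tail++;
    }
    return n;
}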

Stream connection callback (Socket)

-(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    if(eventCode == NSStreamEventHasBytesAvailable)
    {
        if(aStream == inputStream) {
            uint8_t buffer[1024];
            UInt32 len;
            while ([inputStream hasBytesAvailable]) {
                len = (UInt32)[inputStream read:buffer maxLength:sizeof(buffer)];
                if(len > 0)
                {
                    AudioBuffer abuffer;

                    abuffer.mDataByteSize = len; // sample size
                    abuffer.mNumberChannels = 1; // one channel
                    abuffer.mData = buffer;

                    int16_t audioBuffer[len];

                    for(int i = 0; i < len; i++)
                    {
                        audioBuffer[i] = MuLaw_Decode(buffer[i]);
                    }

                    AudioBufferList bufferList;
                    bufferList.mNumberBuffers = 1;
                    bufferList.mBuffers[0] = abuffer;

                    NSLog(@"%", bufferList.mBuffers[0]);

                    [audioProcessor processBuffer:&bufferList];
                }
            }
        }
    }
}
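One thing that stands out in this callback, as an observation rather than the confirmed fix from the answer below: the decoded int16_t samples are computed and then thrown away, because abuffer.mData still points at the raw mu-law bytes. Handing the processor the decoded samples would look roughly like this:

// Sketch: pass the decoded 16-bit samples on, not the raw mu-law bytes.
// Note that mDataByteSize doubles: each mu-law byte becomes a 2-byte sample.
int16_t decoded[1024];
for (UInt32 i = 0; i < len; i++) {
    decoded[i] = MuLaw_Decode(buffer[i]);
}

AudioBuffer abuffer;
abuffer.mNumberChannels = 1;
abuffer.mDataByteSize   = (UInt32)(len * sizeof(int16_t));
abuffer.mData           = decoded;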

The MuLaw_Decode function:

#define MULAW_BIAS 33
int16_t MuLaw_Decode(uint8_t number)
{
    uint8_t sign = 0, position = 0;
    int16_t decoded = 0;
    number = ~number;
    if(number&0x80)
    {
        number&=~(1<<7);
        sign = -1;
    }
    position= ((number & 0xF0) >> 4) + 5;
    decoded = ((1<<position) | ((number&0x0F) << (position - 4)) |(1<<(position-5))) - MULAW_BIAS;
    return (sign == 0) ? decoded : (-(decoded));
}
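As a quick sanity check of the decoder, worked by hand from the code above: 0xFF and 0x7F are positive and negative zero in mu-law, and 0x00 is the largest-magnitude negative sample, so compiling the function with a small harness should print 0, 0, and -8031:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    printf("%d\n", MuLaw_Decode(0xFF)); // 0
    printf("%d\n", MuLaw_Decode(0x7F)); // 0
    printf("%d\n", MuLaw_Decode(0x00)); // -8031
    return 0;
}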

And the code that opens the connection and initialises the audio processor:

CFReadStreamRef readStream;
CFWriteStreamRef writeStream;



CFStreamCreatePairWithSocketToHost(NULL, (__bridge CFStringRef)@"10.0.0.14", 6000, &readStream, &writeStream);


inputStream = (__bridge_transfer NSInputStream *)readStream;
outputStream = (__bridge_transfer NSOutputStream *)writeStream;

[inputStream setDelegate:self];
[outputStream setDelegate:self];

[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];


audioProcessor = [[AudioProcessor alloc] init];
[audioProcessor start];
[audioProcessor setGain:1];

I believe the issue in my code is in the socket connection callback, and that I am not doing the right thing with the data.

Answer

I solved this in the end; see my answer here: http://stackoverflow.com/questions/28340738/playing-raw-pcm-audio-data-coming-from-nsstream/30318859#30318859

I intended to put the code here, but it would be a lot of copy-pasting.
