Playing audio from a continuous stream of data (iOS)


Question

Been banging my head against this problem all morning.

I have set up a connection to a data source which returns audio data. (It is a recording device, so there is no set length on the data; it just streams in, as if you had opened a stream to a radio.)

I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data as it comes in, so I do not want to queue up a few minutes or anything; I want to use the data I am receiving at that exact moment and play it.

Now, I have been searching all morning and found different examples, but none were really laid out. In the


- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data {

method, the "data" parameter is the audio packet. I tried streaming it with AVPlayer and MFVideoPlayer, but nothing has worked for me so far. I also tried looking at Matt Gallagher's AudioStreamer, but still couldn't get it working.

Can anyone here help, preferably with some working examples?

Answer

Careful: the answer below is only valid if you receive PCM data from the server. That, of course, almost never happens. That's why, between receiving the data and rendering the audio, you need another step: data conversion.

Depending on the format, this can be more or less tricky, but in general you should use Audio Converter Services for this step.

You should use -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data only to fill a buffer with the data that comes from the server; playing it should not have anything to do with this method.

Now, to play the data you 'stored' in memory using the buffer, you need to use RemoteIO and Audio Units. Here is a good, comprehensive tutorial. You can skip the "record" part of the tutorial, as you don't really need it.

As you can see, they define a callback for playback:

callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit, 
                              kAudioUnitProperty_SetRenderCallback, 
                              kAudioUnitScope_Global, 
                              kOutputBus,
                              &callbackStruct, 
                              sizeof(callbackStruct));

The playbackCallback function looks like this:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];
        unsigned char *frameBuffer = buffer.mData;
        // inNumberFrames * 2 bytes assumes 16-bit mono samples.
        for (int j = 0; j < inNumberFrames * 2; j++) {
            // getNextPacket() is a function you have to write yourself:
            // it returns the next byte available in the stream buffer.
            frameBuffer[j] = getNextPacket();
        }
    }

    return noErr;
}

Basically, what it does is fill the ioData buffer with the next chunk of bytes that needs to be played. Be sure to zero out (silence) the ioData buffer if there is no new data to play (the player is silenced when there is not enough data in the stream buffer).

Also, you can achieve the same thing with OpenAL, using alSourceQueueBuffers and alSourceUnqueueBuffers to queue buffers one after the other.

That's it. Happy coding!
