How to play audio sample buffers from AVCaptureAudioDataOutput

Problem description

The main goal of the app I'm trying to make is peer-to-peer video streaming (sort of like FaceTime, but over Bluetooth/Wi-Fi).

Using AVFoundation, I was able to capture video and audio sample buffers. I then send the video/audio sample buffer data to the peer. The problem now is how to process the sample buffer data on the receiving side.

As for the video sample buffers, I was able to get a UIImage from each buffer. But for the audio sample buffers, I don't know how to process them so that I can play the audio.
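
(For reference, one common way to do the video part is to lock the pixel buffer, feed its bytes into a CGBitmapContext and build a UIImage from the resulting CGImage. A minimal sketch, assuming the AVCaptureVideoDataOutput is configured for kCVPixelFormatType_32BGRA; this is not the code from the original question.)

#import <UIKit/UIKit.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

// Builds a UIImage from a BGRA video sample buffer.
static UIImage *UIImageFromVideoSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void   *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t  bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t  width       = CVPixelBufferGetWidth(imageBuffer);
    size_t  height      = CVPixelBufferGetHeight(imageBuffer);

    // Interpret the BGRA bytes through a bitmap context and copy them into a CGImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}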

So the question is: how do I process/play the audio sample buffers?

Right now I'm just plotting the waveform, much like in Apple's Wavy sample code:

// sampleBuffer is the CMSampleBufferRef delivered to
// -captureOutput:didOutputSampleBuffer:fromConnection:
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;

// Get a pointer to the raw 16-bit PCM samples inside the block buffer
CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset,
                            &lengthAtOffset, &totalLength, (char **)(&samples));

// Reduce the buffer to one peak value per chunk and hand it to the UI
int numSamplesToRead = 1;
for (int i = 0; i < numSamplesToRead; i++) {

    SInt16 subSet[numSamples / numSamplesToRead];
    for (int j = 0; j < numSamples / numSamplesToRead; j++)
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j];

    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)];
    double scaledSample = (double)audioSample / SINT16_MAX; // cast before dividing, otherwise integer division truncates to 0

    // plot waveform using scaledSample
    [self updateUI:scaledSample];
}
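
(One thing to be careful about: the snippet above hard-codes interleaved 16-bit samples. The actual sample rate, channel count and interleaving of the captured audio can be read from the sample buffer's format description, and the receiving side needs those values in order to play the data back correctly. A small sketch of reading them, not part of the original question:)

#import <CoreMedia/CoreMedia.h>

CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *asbd =
    CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

if (asbd != NULL) {
    Float64 sampleRate     = asbd->mSampleRate;        // e.g. 44100.0
    UInt32  channels       = asbd->mChannelsPerFrame;  // often 1 for the built-in microphone
    UInt32  bitsPerChannel = asbd->mBitsPerChannel;    // 16 for SInt16 LPCM
    BOOL    nonInterleaved = (asbd->mFormatFlags & kAudioFormatFlagIsNonInterleaved) != 0;

    NSLog(@"PCM: %.0f Hz, %u channel(s), %u bits, non-interleaved: %d",
          sampleRate, (unsigned)channels, (unsigned)bitsPerChannel, nonInterleaved);
}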

Recommended answer

To display the video you can use the following (here it grabs the ARGB picture and converts it to a Qt (Nokia Qt) QImage; you can replace that with another image type).

Put this into the delegate class, inside the callback

 - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection

// Runs inside -captureOutput:didOutputSampleBuffer:fromConnection:
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

// Lock the pixel buffer so its base address can be read directly
CVPixelBufferLockBaseAddress(imageBuffer, 0);

SVideoSample sample; // small helper struct describing the frame

sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
sample.width       = CVPixelBufferGetWidth(imageBuffer);
sample.height      = CVPixelBufferGetHeight(imageBuffer);

// Wrap the raw ARGB bytes in a QImage and pass it on to the receiver
QImage img((unsigned char *)sample.pImage, sample.width, sample.height,
           sample.bytesPerRow, QImage::Format_ARGB32);

self->m_receiver->eventReceived(img);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
[pool drain];
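
The answer above only covers the video side. For the audio side the question actually asks about, one common option on iOS is Audio Queue Services: describe the incoming PCM with an AudioStreamBasicDescription, then copy each received chunk into an AudioQueueBuffer and enqueue it. The following is only a rough sketch and not from the original answer; it assumes interleaved 16-bit mono LPCM at 44.1 kHz, and in a real app the format values should come from the sender's format description and buffers should be recycled rather than allocated per chunk.

#import <AudioToolbox/AudioToolbox.h>

// Called by the audio queue when a buffer has finished playing. A real app
// would refill and re-enqueue it from the network receive queue; this sketch
// simply releases it and allocates fresh buffers in EnqueuePCM below.
static void PlaybackCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    AudioQueueFreeBuffer(inAQ, inBuffer);
}

// Create and start the playback queue once, before the first packet arrives.
static AudioQueueRef CreatePlaybackQueue(void)
{
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0;               // assumption: must match the sender
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    format.mChannelsPerFrame = 1;                     // assumption: mono
    format.mBitsPerChannel   = 16;
    format.mBytesPerFrame    = format.mChannelsPerFrame * sizeof(SInt16);
    format.mFramesPerPacket  = 1;
    format.mBytesPerPacket   = format.mBytesPerFrame;

    AudioQueueRef queue = NULL;
    AudioQueueNewOutput(&format, PlaybackCallback, NULL, NULL, NULL, 0, &queue);
    AudioQueueStart(queue, NULL);                     // plays as soon as buffers are enqueued
    return queue;
}

// Call this for every chunk of PCM bytes received from the peer.
static void EnqueuePCM(AudioQueueRef queue, const void *bytes, UInt32 length)
{
    AudioQueueBufferRef buffer = NULL;
    if (AudioQueueAllocateBuffer(queue, length, &buffer) == noErr) {
        memcpy(buffer->mAudioData, bytes, length);
        buffer->mAudioDataByteSize = length;
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }
}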
