How to mute the capture sound in AVFoundation?

Question

I want to take a picture using AVFoundation without any sound (yes, I have kept in mind that the user's choice will enable this feature).

Two questions on Stack Overflow gave the most information:

  • AVFoundation, how to turn off the shutter sound when captureStillImageAsynchronouslyFromConnection? (no accepted answer)

  • Silence the AVCapture shutter sound on iPhone (suggests AVCaptureVideoDataOutput)

Both answers refer to capturing video frames, which I believe is the right approach. The problem is that the AVFoundation library isn't easy to master and I can't really get the hang of it (capturing an image using AVCaptureStillImageOutput was itself tough for me).

Answer

I found the code to do it here:

http://www.benjaminloulier.com/articles/ios4-and-direct-access-to-the-camera

The important parts, in overview:

Set up your session like this:

-(void)initialize_and_Start_Session_without_CaptureSound
{
    /*We set up the input*/
    NSError *error = nil;
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] 
                                          error:&error];
    /*We set up the output*/
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    /*While a frame is being processed in the -captureOutput:didOutputSampleBuffer:fromConnection: delegate method,
     no other frames are added to the queue. If you don't want this behaviour, set the property to NO.*/
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    /*We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting
     in the queue, because that can cause memory issues). It is the inverse of the maximum framerate.
     In this example we set a min frame duration of 1/10 second, so a maximum framerate of 10 fps: we declare that
     we are not able to process more than 10 frames per second.*/
    //captureOutput.minFrameDuration = CMTimeMake(1, 10);
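    /* Note: minFrameDuration was deprecated in iOS 5; on later SDKs the maximum
     framerate is capped via AVCaptureConnection's videoMinFrameDuration or, from
     iOS 7, AVCaptureDevice's activeVideoMinFrameDuration. */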

    /*We create a serial queue to handle the processing of our frames*/
    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    // Set the video output to store frames in BGRA (it is supposed to be faster)
    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [captureOutput setVideoSettings:videoSettings]; 
    /*And we create a capture session*/
    self.session = [[[AVCaptureSession alloc] init] autorelease]; /* autorelease balances the alloc under MRC */
    /*We add the input and the output*/
    if (captureInput)
        [self.session addInput:captureInput];
    [self.session addOutput:captureOutput];
    [captureOutput release]; /* the session retains its outputs */


    /*We start the capture*/
    [self.session startRunning];

}
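
For context, here is a minimal sketch of the controller interface the snippets on this page assume. Only session, captureImage and captureImageNow appear in the original code; the class name and everything else are assumptions, written for the same manual-retain-count era as the rest of the code:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface SilentCaptureViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate>
{
    BOOL captureImageNow; /* set to YES when the next frame should be kept */
}
@property (retain, nonatomic) AVCaptureSession *session;
@property (retain, nonatomic) UIImage *captureImage;
@end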

You will get the camera output in the delegate method below. I create an image and add it to my parent view; you can change that to suit your needs.

#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection 
{ 
    /*We create an autorelease pool because we are not on the main queue, so our code
     does not run on the main thread and the thread we are on needs its own pool.*/
    if (captureImageNow)
    {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        /*Lock the image buffer*/
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        /*Get information about the image*/
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        /*Create a CGImageRef from the CVImageBufferRef*/
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);

        /*We release some components*/
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);

        /*We keep the result as a UIImage (the orientation has to be changed so that the photo
         displays correctly). All display work must happen on the main thread because UIKit is
         not thread safe, and we are not on the main thread (remember we didn't use the main
         queue), so we use performSelectorOnMainThread to update the UI.*/
        self.captureImage = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];

        /*We release the CGImageRef*/
        CGImageRelease(newImage);

        [self performSelectorOnMainThread:@selector(AddImageToParentView) withObject:nil waitUntilDone:YES];

        /*We unlock the image buffer*/
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        [pool drain];
        captureImageNow = NO;
    }
}
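
To actually take the "silent photo", something just has to set captureImageNow to YES so the next frame is kept, and AddImageToParentView (called above but not shown in the original post) has to put the image on screen. A minimal sketch, assuming a plain UIImageView; both method bodies below are assumptions, not part of the quoted answer:

// Hypothetical trigger, e.g. wired to a button: the next delegate callback
// keeps its frame as a still image, and no shutter sound is ever played.
- (IBAction)takeSilentPhoto:(id)sender
{
    captureImageNow = YES;
}

// Runs on the main thread (called via performSelectorOnMainThread above).
- (void)AddImageToParentView
{
    UIImageView *imageView = [[UIImageView alloc] initWithImage:self.captureImage];
    imageView.frame = self.view.bounds;
    [self.view addSubview:imageView];
    [imageView release]; /* MRC, matching the era of the original code */
}

When you are done capturing, remember to call [self.session stopRunning].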
