AVCaptureSession with multiple previews


Question


I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.


I can see the video so I know it's working.


However, I'd like to have a collection view and in each cell add a preview layer so that each cell shows a preview of the video.


If I try to pass the preview layer into each cell and add it as a sublayer, it removes the layer from the other cells, so it only ever displays in one cell at a time.


Is there another (better) way of doing this?

Answer


I ran into the same problem of needing multiple live views displayed at the same time. The UIImage approach suggested above was too slow for what I needed. Here are the two solutions I found:


The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."


This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (Think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.


Here is some sample code to replicate an AVCaptureVideoPreviewLayer:

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];



Initialize the CAReplicatorLayer and set its properties



Note: This will replicate the live preview layer four times.

NSUInteger replicatorInstances = 4;

CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);



Add the layers



Note: From my experience, you need to add the layer you want replicated as a sublayer of the CAReplicatorLayer.

[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];



Downside



The downside of using a CAReplicatorLayer is that it handles all the placement of the layer replications: it applies any set transformations to each instance, and everything stays contained within the replicator layer itself. E.g., there would be no way to have a replication of the AVCaptureVideoPreviewLayer on two separate cells.



The second option is to render the sample buffers manually. This method, albeit a tad more complex, solves the above-mentioned downside of the CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.

Note: There may be other ways to render the SampleBuffer, but I chose OpenGL because of its performance. The code was inspired by and adapted from CIFunHouse.


Here is how I implemented it:

_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

// Note: must be done after the all your GLKViews are properly set up
// Note: passing NSNull for the working color space disables Core Image color management (faster)
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{kCIContextWorkingColorSpace : [NSNull null]}];



Dispatch Queue

This queue will be used for the session and delegate.

// a serial queue, so sample buffers arrive at the delegate in order
self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);



Initialize your AVSession & AVCaptureVideoDataOutput



Note: I have removed all the device capability checks to keep this readable.

dispatch_async(self.captureSessionQueue, ^(void) {
    NSError *error = nil;

    // get the input device and also validate the settings
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    // pick the first available camera (capability checks removed for readability)
    AVCaptureDevice *videoDevice = [videoDevices firstObject];

    // obtain device input
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

    // obtain the preset and validate the preset
    NSString *preset = AVCaptureSessionPresetMedium;

    // CoreImage wants BGRA pixel format
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};

    // create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = preset;

Note: This next code is the magic. This is where we create and add the DataOutput to the AVSession, so that we can intercept the camera frames using the delegate. This was the breakthrough I needed in order to figure out how to solve the problem.

    // create and configure video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    [videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];

    // begin configure capture session
    [self.captureSession beginConfiguration];

    // connect the video device input and video data and still image outputs
    [self.captureSession addInput:videoDeviceInput];
    [self.captureSession addOutput:videoDataOutput];

    [self.captureSession commitConfiguration];

    // then start everything
    [self.captureSession startRunning];
});
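
In a real app you would keep at least the session-level guards. Here is a minimal sketch (my addition, not part of the original answer) using AVCaptureSession's canAddInput:/canAddOutput:, with the same variable names as above:

// Inside the beginConfiguration/commitConfiguration block above.
// A sketch of the guards the capability checks would provide.
if (videoDeviceInput && [self.captureSession canAddInput:videoDeviceInput]) {
    [self.captureSession addInput:videoDeviceInput];
}
if ([self.captureSession canAddOutput:videoDataOutput]) {
    [self.captureSession addOutput:videoDataOutput];
}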




2.2 OpenGL Views

We are using a GLKView to render each live preview, so if you want 4 live previews you need 4 GLKViews.

self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO; // we call -display explicitly instead of setNeedsDisplay


Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90-degree transform so that we can draw the video preview as if we were in a landscape-oriented view. If you're using the front camera and want a mirrored preview (so that users see themselves as in a mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform).

self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;    
[self addSubview: self.livePreviewView];


Bind the frame buffer to get the frame buffer width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.

[self.livePreviewView bindDrawable];


In addition, since we will be accessing the bounds on another queue (_captureSessionQueue), we capture this information here so that we won't be accessing the live preview view's properties from another thread/queue.

_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = self.livePreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = self.livePreviewView.drawableHeight;

dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);        

    // *Horizontally flip here, if using front camera.*

    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});

Note: If you are using the front camera, you can flip the live preview horizontally like this:

transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));
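
Putting those two snippets together, the main-queue block for a mirrored front-camera preview would look like this (same view and property names as above):

dispatch_async(dispatch_get_main_queue(), ^(void) {
    // rotate into portrait, then mirror horizontally for the front camera
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);
    transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));

    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});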




2.3 Delegate Implementation

After we have the contexts, session, and GLKViews set up, we can now render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;


You will need a reference to each GLKView and its videoPreviewViewBounds. For simplicity, I will assume they are both contained in a UICollectionViewCell (a sketch of such a cell follows the delegate code below). You will need to alter this for your own use case.

    for(CustomLivePreviewCell *cell in self.livePreviewCells) {
        CGFloat previewAspect = cell.videoPreviewViewBounds.size.width  / cell.videoPreviewViewBounds.size.height;

        // To maintain the aspect ratio of the screen size, we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }

        [cell.livePreviewView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        if (sourceImage) {
            [_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
        }

        [cell.livePreviewView display];
    }
}
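
For reference, here is a minimal sketch of what the CustomLivePreviewCell interface assumed above could look like; the class and its livePreviewView/videoPreviewViewBounds properties are naming assumptions made to match the delegate code, not part of any framework:

#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>

// Hypothetical cell owning one live preview; create livePreviewView
// with the shared EAGLContext as shown in section 2.2.
@interface CustomLivePreviewCell : UICollectionViewCell

// The GLKView this cell renders into.
@property (nonatomic, strong) GLKView *livePreviewView;

// Drawable bounds in pixels, captured on the main thread after bindDrawable.
@property (nonatomic, assign) CGRect videoPreviewViewBounds;

@end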


This solution lets you have as many live previews as you want, using OpenGL to render the image buffers received from the AVCaptureVideoDataOutputSampleBufferDelegate.


Here is a GitHub project I threw together with both solutions: https://github.com/JohnnySlagle/Multiple-Camera-Feeds
