iOS Camera Video Live Preview Is Offset To Picture Taken


Question

I am using the camera.

The camera presents as a live feed to the user, and when they tap, an image is created and passed to the user.

The problem is that the captured image extends to the topmost position of the frame, which is higher than what the live preview shows.

Do you know how to adjust the frame of the camera so that the top of the live video feed matches the top of the picture they are going to take?

I thought this code would do that, but it doesn't. Here is my current camera frame code:

    // Add the device to the session, get the video feed it produces, and add it to the video feed layer
    func initSessionFeed()
    {
        _session = AVCaptureSession()
        _session.sessionPreset = AVCaptureSessionPresetPhoto
        updateVideoFeed()

        _videoPreviewLayer = AVCaptureVideoPreviewLayer(session: _session)
        _videoPreviewLayer.frame = CGRectMake(0,0, self.frame.width, self.frame.width) //the live footage IN the video feed view
        _videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
        self.layer.addSublayer(_videoPreviewLayer)//add the footage from the device to the video feed layer
    }

    func initOutputCapture()
    {
        //set up output settings
        _stillImageOutput = AVCaptureStillImageOutput()
        let outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG] // request JPEG data for captured stills
        _stillImageOutput.outputSettings = outputSettings
        _session.addOutput(_stillImageOutput)
        _session.startRunning()
    }

    func configureDevice()
    {
        if _currentDevice != nil
        {
            _currentDevice.lockForConfiguration(nil)
            _currentDevice.focusMode = .Locked
            _currentDevice.unlockForConfiguration()
        }
    }

    func captureImage(callback:(iImage)->Void)
    {
        if(_captureInProcess == true)
        {
            return
        }
        _captureInProcess = true

        var videoConnection:AVCaptureConnection!
        for connection in _stillImageOutput.connections
        {
            for port in (connection as AVCaptureConnection).inputPorts
            {
                if (port as AVCaptureInputPort).mediaType == AVMediaTypeVideo
                {
                    videoConnection = connection as AVCaptureConnection
                    break
                }
            }

            if videoConnection != nil
            {
                break // a video connection has been found; stop searching
            }
        }

        if videoConnection  != nil
        {
            _stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection)
            {
                (imageSampleBuffer : CMSampleBuffer!, _) in
                let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
                var pickedImage = UIImage(data: imageDataJpeg, scale: 1)
                UIGraphicsBeginImageContextWithOptions(pickedImage.size, false, pickedImage.scale)
                pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
                pickedImage = UIGraphicsGetImageFromCurrentImageContext() //this returns a normalized image
                if(self._currentDevice == self._frontCamera)
                {
                    // redraw the front-camera shot mirrored so it matches what the user saw in the preview
                    pickedImage = UIImage(CGImage: pickedImage.CGImage, scale: 1.0, orientation: .UpMirrored)
                    pickedImage.drawInRect(CGRectMake(0, 0, pickedImage.size.width, pickedImage.size.height))
                    pickedImage = UIGraphicsGetImageFromCurrentImageContext()
                }
                UIGraphicsEndImageContext()
                var image:iImage = iImage(uiimage: pickedImage)
                self._captureInProcess = false
                callback(image)
            }
        }
    }

If I adjust the frame of the AVCaptureVideoPreviewLayer by, say, raising the y value, I just get a black bar showing the offset amount. I'm very curious why the topmost part of the video frame does not match my output image.

I did 'crop' the camera preview so it is a perfect square, but then why is the top of the live camera feed not the actual top of the capture? (The image defaults to a much higher position that the camera feed does not show.) A worked example of the geometry follows below.
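
For concreteness, here is a small sketch of the AspectFill geometry; the 3024 x 4032 frame size is an assumed example, not a value from the code above:

    // Hypothetical numbers: AVCaptureSessionPresetPhoto delivering a 3:4 portrait
    // frame, displayed in a square preview layer with ResizeAspectFill.
    let photoSize = CGSize(width: 3024, height: 4032) // assumed full still-image size
    let visibleSide = photoSize.width                 // the square preview shows a width x width slice

    // AspectFill centers the feed, so the overflow is cropped equally top and bottom:
    let hiddenHeight = photoSize.height - visibleSide // 1008 px of the photo never appear on screen
    let hiddenAboveTop = hiddenHeight / 2             // 504 px sit above the preview's top edge

Under these assumptions the captured photo reaches 504 px higher than anything the square preview ever displayed.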

Update:

Here are the before and after screenshots of what I am talking about:

Before: Before image. This is what the live feed is showing.

After: After image. This is the resulting image when the user taps take photo.

Answer

Instead of

_videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

you could try

_videoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect

In general, the preview and the captured image width and height will have to match. You might have to do more "cropping" on the preview or on the final image, or both.
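
As a sketch of the "more cropping on the final image" idea, assuming the square preview keeps ResizeAspectFill (so it shows the vertically centered square of the photo), the captured image can be cropped to that same centered square before it is handed to the callback. The helper name cropToPreviewSquare is made up for illustration; drawing through UIKit keeps the image's orientation handling intact:

    import UIKit

    // Sketch: crop a captured photo to the centered square that a square
    // AspectFill preview actually displays. Assumes the photo is portrait
    // (taller than wide); cropToPreviewSquare is a hypothetical helper.
    func cropToPreviewSquare(photo: UIImage) -> UIImage
    {
        let side = min(photo.size.width, photo.size.height)
        let cropRect = CGRectMake((photo.size.width  - side) / 2,
                                  (photo.size.height - side) / 2,
                                  side, side)
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), false, photo.scale)
        // Shift the full image so the centered square lands at the context's origin.
        photo.drawInRect(CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                                    photo.size.width, photo.size.height))
        let cropped = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return cropped
    }

In captureImage, the result of the normalization step could then be passed through this helper (for example, pickedImage = cropToPreviewSquare(pickedImage)) so the saved photo matches what the live feed showed.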
