iOS Swift - Custom camera overlay


Problem description


Hello, I would like to open a camera in my app like this:

I want to open the camera only in the middle section of the screen, so the user can take a snap only inside the rectangular section.

The code I am using is this:

import UIKit
import AVFoundation

class TakeProductPhotoController: UIViewController {

    let captureSession = AVCaptureSession()
    var previewLayer : AVCaptureVideoPreviewLayer?

    // If we find a device we'll store it here for later use
    var captureDevice : AVCaptureDevice?

    override func viewDidLoad() {
        super.viewDidLoad()

        // Do any additional setup after loading the view, typically from a nib.
        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        let devices = AVCaptureDevice.devices()

        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if (device.hasMediaType(AVMediaTypeVideo)) {
                // Finally check the position and confirm we've got the back camera
                if(device.position == AVCaptureDevicePosition.Back) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        print("Capture device found")
                        beginSession()
                    }
                }
            }
        }

    }
    func updateDeviceSettings(focusValue : Float, isoValue : Float) {
        let error: NSErrorPointer = nil

        if let device = captureDevice {
            do {
                try captureDevice!.lockForConfiguration()

            } catch let error1 as NSError {
                error.memory = error1
            }

                device.setFocusModeLockedWithLensPosition(focusValue, completionHandler: { (time) -> Void in
                    //
                })

                // Adjust the iso to clamp between minIso and maxIso based on the active format
                let minISO = device.activeFormat.minISO
                let maxISO = device.activeFormat.maxISO
                let clampedISO = isoValue * (maxISO - minISO) + minISO

                device.setExposureModeCustomWithDuration(AVCaptureExposureDurationCurrent, ISO: clampedISO, completionHandler: { (time) -> Void in
                    //
                })

                device.unlockForConfiguration()

        }
    }

    func touchPercent(touch : UITouch) -> CGPoint {
        // Get the dimensions of the screen in points
        let screenSize = UIScreen.mainScreen().bounds.size

        // Create an empty CGPoint object set to 0, 0
        var touchPer = CGPointZero

        // Set the x and y values to be the value of the tapped position, divided by the width/height of the screen
        touchPer.x = touch.locationInView(self.view).x / screenSize.width
        touchPer.y = touch.locationInView(self.view).y / screenSize.height

        // Return the populated CGPoint
        return touchPer
    }

    func focusTo(value : Float) {
        let error: NSErrorPointer = nil


        if let device = captureDevice {
            do {
                try captureDevice!.lockForConfiguration()

            } catch let error1 as NSError {
                error.memory = error1
            }

                device.setFocusModeLockedWithLensPosition(value, completionHandler: { (time) -> Void in
                    //
                })
                device.unlockForConfiguration()

        }
    }

    let screenWidth = UIScreen.mainScreen().bounds.size.width

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
        //if let touchPer = touches.first {
            let touchPer = touchPercent( touches.first! as UITouch )
         updateDeviceSettings(Float(touchPer.x), isoValue: Float(touchPer.y))


        super.touchesBegan(touches, withEvent:event)
    }

   override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
      // if let anyTouch = touches.first {
           let touchPer = touchPercent( touches.first! as UITouch )
       // let touchPercent = anyTouch.locationInView(self.view).x / screenWidth
  //      focusTo(Float(touchPercent))
    updateDeviceSettings(Float(touchPer.x), isoValue: Float(touchPer.y))

    }

    func configureDevice() {
          let error: NSErrorPointer = nil
        if let device = captureDevice {
            //device.lockForConfiguration(nil)

            do {
                try captureDevice!.lockForConfiguration()

            } catch let error1 as NSError {
                error.memory = error1
            }

            device.focusMode = .Locked
            device.unlockForConfiguration()
        }

    }

    func beginSession() {
        configureDevice()
        var err : NSError? = nil

        var deviceInput: AVCaptureDeviceInput!
        do {
            deviceInput = try AVCaptureDeviceInput(device: captureDevice)

        } catch let error as NSError {
            err = error
            deviceInput = nil
        }


        captureSession.addInput(deviceInput)

        if err != nil {
            print("error: \(err?.localizedDescription)")
        }

        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

        self.view.layer.addSublayer(previewLayer!)
        previewLayer?.frame = self.view.layer.frame
        captureSession.startRunning()
    }
}

In this code, the camera takes up the whole screen.

Solution

If you want to start the camera in a custom UIView, you need to change the AVCaptureVideoPreviewLayer: you can change its bounds and its position, and you can also add a mask to it.

Coming to your question: the capture layer takes the full screen because you have:

 previewLayer?.frame = self.view.layer.frame

Change this line to use the overlay view's frame:

  previewLayer?.frame = self.overLayView.layer.frame 

Or, if you want to position the camera layer manually using raw values:

  previewLayer?.frame = CGRectMake(x,y,width,height)
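For instance, here is a minimal sketch of computing such a raw frame so the preview sits centered in the middle of the screen (modern Swift syntax; the 80%/40% fractions and the 320x568 screen size are illustrative assumptions, not values from the post):

```swift
import Foundation

// Compute a rect of the given fractional size, centered inside a container.
// Pure geometry, so it works without UIKit and can be unit-tested.
func centeredRect(in container: CGRect,
                  widthFraction: CGFloat,
                  heightFraction: CGFloat) -> CGRect {
    let w = container.width * widthFraction
    let h = container.height * heightFraction
    return CGRect(x: container.minX + (container.width - w) / 2,
                  y: container.minY + (container.height - h) / 2,
                  width: w,
                  height: h)
}

// Example: a 320x568-point screen, with the preview occupying the middle 80% x 40%.
let screen = CGRect(x: 0, y: 0, width: 320, height: 568)
let previewFrame = centeredRect(in: screen, widthFraction: 0.8, heightFraction: 0.4)
// previewFrame has origin (32.0, 170.4) and size 256.0 x 227.2
```

You would then assign `previewFrame` to `previewLayer?.frame` instead of hard-coding the numbers.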

Also, note that if you want to show the camera inside the overlay view, you need to add the preview layer as a sublayer of that overlay view.

So this line:

     self.view.layer.addSublayer(previewLayer!)

will be this:

    self.overLayView.layer.addSublayer(previewLayer!)

To stretch and fit the preview layer inside a view (here `cameraView` stands for the overlay view):

  previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

        let bounds = cameraView.layer.frame
        previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
        previewLayer!.bounds = bounds
        previewLayer!.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds))

        self.view.layer.addSublayer(previewLayer!)
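The bounds-and-position placement above is plain geometry, so it can be factored into a small helper. A sketch (not the answerer's code; modern Swift syntax):

```swift
import Foundation

// Given a container's bounds, return the bounds and the centered position
// to assign to a sublayer so that it fills the container.
func layerPlacement(in containerBounds: CGRect) -> (bounds: CGRect, position: CGPoint) {
    return (containerBounds, CGPoint(x: containerBounds.midX, y: containerBounds.midY))
}

let placement = layerPlacement(in: CGRect(x: 0, y: 0, width: 200, height: 300))
// placement.position is (100.0, 150.0)

// The AVFoundation wiring itself (iOS only) would then read, for example:
// previewLayer!.bounds = placement.bounds
// previewLayer!.position = placement.position
```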
