Most efficient/realtime way to get pixel values from iOS camera feed in Swift


Question

There are some discussions on here about similar questions, like this one, but they seem quite outdated, so I thought I'd ask here.

I want to get near-realtime RGB pixel values, or even better, a full-image RGB histogram, from a camera feed in Swift 2.0. I want this to be as quick and up to date as possible (~30 fps or higher, ideally).

Can I get this directly from an AVCaptureVideoPreviewLayer, or do I need to capture each frame (async, I assume, if the process takes significant time) and then extract pixel values from the JPEG/PNG render?

Some example code, taken from jquave but modified for Swift 2.0:

import UIKit
import AVFoundation

class ViewController: UIViewController {

    let captureSession = AVCaptureSession()
    var previewLayer: AVCaptureVideoPreviewLayer?
    var captureDevice: AVCaptureDevice?

    override func viewDidLoad() {
        super.viewDidLoad()

        // Do any additional setup after loading the view, typically from a nib.
        captureSession.sessionPreset = AVCaptureSessionPresetHigh

        let devices = AVCaptureDevice.devices()

        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if device.hasMediaType(AVMediaTypeVideo) {
                // Finally check the position and confirm we've got the back camera
                if device.position == AVCaptureDevicePosition.Back {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        print("Capture device found")
                        beginSession()
                    }
                }
            }
        }
    }

    func focusTo(value: Float) {
        if let device = captureDevice {
            do {
                try device.lockForConfiguration()
                device.setFocusModeLockedWithLensPosition(value, completionHandler: { (time) -> Void in
                })
                device.unlockForConfiguration()
            } catch {
                print("Can't change focus of capture device")
            }
        }
    }

    func configureDevice() {
        if let device = captureDevice {
            do {
                try device.lockForConfiguration()
                device.focusMode = .Locked
                device.unlockForConfiguration()
            } catch {
                print("Capture device not configurable")
            }
        }
    }

    func beginSession() {
        configureDevice()
        do {
            try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
            updateDeviceSettings(0.0, isoValue: 0.0)
        } catch {
            print("Capture device not initialisable")
        }
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        self.view.layer.addSublayer(previewLayer!)
        previewLayer?.frame = self.view.layer.frame
        captureSession.startRunning()
    }

    func updateDeviceSettings(focusValue: Float, isoValue: Float) {
        if let device = captureDevice {
            do {
                try device.lockForConfiguration()
                device.setFocusModeLockedWithLensPosition(focusValue, completionHandler: { (time) -> Void in
                })

                // Clamp the ISO between minISO and maxISO for the active format
                let minISO = device.activeFormat.minISO
                let maxISO = device.activeFormat.maxISO
                let clampedISO = isoValue * (maxISO - minISO) + minISO

                device.setExposureModeCustomWithDuration(AVCaptureExposureDurationCurrent, ISO: clampedISO, completionHandler: { (time) -> Void in
                })

                device.unlockForConfiguration()
            } catch {
                print("Can't update device settings")
            }
        }
    }
}

Answer

You don't want an AVCaptureVideoPreviewLayer - that's what you want if you want to display the video. Instead, you want a different output: AVCaptureVideoDataOutput:

https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureVideoDataOutput_Class/index.html#//apple_ref/occ/cl/AVCaptureVideoDataOutput

This gives you direct access to the stream of sample buffers, which you can then get into pixel-space.
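
For illustration, a minimal Swift 2-style sketch of attaching an AVCaptureVideoDataOutput to the question's session and reading raw BGRA bytes in the sample buffer delegate might look like this (the extension, queue label, and method names here are illustrative, not part of the original answer):

import AVFoundation

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    func addVideoDataOutput() {
        let videoOutput = AVCaptureVideoDataOutput()
        // Request BGRA so the buffer is directly readable on the CPU
        videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        // Drop late frames rather than queueing them, to stay near-realtime
        videoOutput.alwaysDiscardsLateVideoFrames = true
        let queue = dispatch_queue_create("video-frames", DISPATCH_QUEUE_SERIAL)
        videoOutput.setSampleBufferDelegate(self, queue: queue)
        if captureSession.canAddOutput(videoOutput) {
            captureSession.addOutput(videoOutput)
        }
    }

    // Called once per captured frame, on the serial queue above
    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(pixelBuffer, 0)
        let base = UnsafePointer<UInt8>(CVPixelBufferGetBaseAddress(pixelBuffer))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        // Example: read the BGRA pixel at (x, y)
        let x = 0
        let y = 0
        let pixel = base + y * bytesPerRow + x * 4
        let (b, g, r) = (pixel[0], pixel[1], pixel[2])
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
        print("Pixel at (\(x), \(y)): r=\(r) g=\(g) b=\(b)")
    }
}

You would call addVideoDataOutput() from beginSession(), before startRunning().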

Just a note: I don't know what the throughput on current devices is, but I was unable to get a live stream at the highest quality from the iPhone 4S because the GPU<-->CPU pipeline was too slow.
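
On that note, the full-image RGB histogram the question asks for is exactly this kind of CPU-side pass over the locked buffer. A rough sketch under the same BGRA assumption (the function name is illustrative; for real-time use, Accelerate's vImageHistogramCalculation_ARGB8888 would likely be the faster choice):

func histogramForBuffer(pixelBuffer: CVPixelBuffer) -> (r: [Int], g: [Int], b: [Int]) {
    CVPixelBufferLockBaseAddress(pixelBuffer, 0)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, 0) }

    // One 256-bin count per channel
    var r = [Int](count: 256, repeatedValue: 0)
    var g = [Int](count: 256, repeatedValue: 0)
    var b = [Int](count: 256, repeatedValue: 0)

    let base = UnsafePointer<UInt8>(CVPixelBufferGetBaseAddress(pixelBuffer))
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)

    for y in 0..<height {
        let row = base + y * bytesPerRow
        for x in 0..<width {
            let pixel = row + x * 4   // BGRA layout
            b[Int(pixel[0])] += 1
            g[Int(pixel[1])] += 1
            r[Int(pixel[2])] += 1
        }
    }
    return (r, g, b)
}

Calling this from the sample buffer delegate gives one histogram per frame; if the pass can't keep up at the chosen preset, alwaysDiscardsLateVideoFrames means you drop frames rather than accumulate latency.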
