Set GrayScale on Output of AVCaptureDevice in iOS
Question
I want to implement a custom camera in my app, so I am building it with AVCaptureDevice.
Now I want the custom camera to show only a grayscale output, so I am trying to achieve this with setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains: and AVCaptureWhiteBalanceGains. I am using AVCamManual: Extending AVCam to Use Manual Capture as a reference.
- (void)setWhiteBalanceGains:(AVCaptureWhiteBalanceGains)gains
{
    NSError *error = nil;
    if ( [videoDevice lockForConfiguration:&error] ) {
        // Conversion can yield out-of-bound values, cap to limits
        AVCaptureWhiteBalanceGains normalizedGains = [self normalizedGains:gains];
        [videoDevice setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:normalizedGains completionHandler:nil];
        [videoDevice unlockForConfiguration];
    }
    else {
        NSLog( @"Could not lock device for configuration: %@", error );
    }
}
But for that, I must pass RGB gain values between 1 and 4, so I created this method to clamp them to the MIN and MAX values.
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain   = MAX( 1.0, g.redGain );
    g.greenGain = MAX( 1.0, g.greenGain );
    g.blueGain  = MAX( 1.0, g.blueGain );
    g.redGain   = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
    g.blueGain  = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );
    return g;
}
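The same clamping logic can be sketched in pure Swift, outside AVFoundation. The struct below is a stand-in for AVCaptureWhiteBalanceGains, and maxGain stands in for videoDevice.maxWhiteBalanceGain (commonly 4.0); neither is real API here.

```swift
// Stand-in for AVCaptureWhiteBalanceGains, so the clamping
// can be demonstrated without a capture device.
struct WhiteBalanceGains {
    var redGain: Float
    var greenGain: Float
    var blueGain: Float
}

// Clamp each gain into the device's valid range [1.0, maxGain].
func normalizedGains(_ gains: WhiteBalanceGains, maxGain: Float) -> WhiteBalanceGains {
    var g = gains
    g.redGain   = min(maxGain, max(1.0, g.redGain))
    g.greenGain = min(maxGain, max(1.0, g.greenGain))
    g.blueGain  = min(maxGain, max(1.0, g.blueGain))
    return g
}

let clamped = normalizedGains(
    WhiteBalanceGains(redGain: 0.5, greenGain: 2.0, blueGain: 6.0),
    maxGain: 4.0
)
// clamped is (redGain: 1.0, greenGain: 2.0, blueGain: 4.0)
```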
I am also trying out different effects, such as passing static RGB gain values.
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain   = 3;
    g.greenGain = 2;
    g.blueGain  = 1;
    return g;
}
Now, I want to produce this grayscale (formula: Pixel = 0.30078125f * R + 0.5859375f * G + 0.11328125f * B) in my custom camera. I have tried the following for this formula.
- (AVCaptureWhiteBalanceGains)normalizedGains:(AVCaptureWhiteBalanceGains)gains
{
    AVCaptureWhiteBalanceGains g = gains;
    g.redGain   = g.redGain * 0.30078125;
    g.greenGain = g.greenGain * 0.5859375;
    g.blueGain  = g.blueGain * 0.11328125;
    float grayScale = g.redGain + g.greenGain + g.blueGain;
    g.redGain   = MAX( 1.0, grayScale );
    g.greenGain = MAX( 1.0, grayScale );
    g.blueGain  = MAX( 1.0, grayScale );
    g.redGain   = MIN( videoDevice.maxWhiteBalanceGain, g.redGain );
    g.greenGain = MIN( videoDevice.maxWhiteBalanceGain, g.greenGain );
    g.blueGain  = MIN( videoDevice.maxWhiteBalanceGain, g.blueGain );
    return g;
}
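For reference, the luminance formula from the question operates on per-pixel color values, not on white-balance gains (which only rescale each channel independently and cannot mix channels). A minimal pure-Swift sketch of applying it to a single RGB pixel, with values in 0...1:

```swift
// Grayscale value for one pixel, using the weights from the question:
// Pixel = 0.30078125 * R + 0.5859375 * G + 0.11328125 * B
// The weights are 77/256, 150/256, and 29/256, so they sum to exactly 1.
func grayscaleValue(r: Float, g: Float, b: Float) -> Float {
    return 0.30078125 * r + 0.5859375 * g + 0.11328125 * b
}

// A pure white pixel maps to full brightness...
let white = grayscaleValue(r: 1.0, g: 1.0, b: 1.0)   // == 1.0
// ...and a pure red pixel maps to its red weight.
let red = grayscaleValue(r: 1.0, g: 0.0, b: 0.0)     // == 0.30078125
```

Because this per-pixel mixing cannot be expressed through white-balance gains, the answer below moves the conversion into the image pipeline instead.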
So how can I map these values into the 1 to 4 range?
Is there any way or scale to compare these things?
Any help would be appreciated.
Answer
CoreImage provides a host of filters for adjusting images on the GPU, and it can be used efficiently with video data, whether from a camera feed or a video file.
There is an article on objc.io showing how to do this. The examples are in Objective-C, but the explanation should be clear enough to follow.
The basic steps are:
- Create an EAGLContext, configured to use OpenGLES2.
- Create a GLKView to display the rendered output, using the EAGLContext.
- Create a CIContext, using the same EAGLContext.
- Create a CIFilter using a CIColorMonochrome CoreImage filter.
- Create an AVCaptureSession with an AVCaptureVideoDataOutput.
- In the AVCaptureVideoDataOutputDelegate method, convert the CMSampleBuffer to a CIImage, apply the CIFilter to the image, and draw the filtered image to the CIContext.
This pipeline ensures that the video pixel buffers stay on the GPU, from camera to display, avoiding moving data to the CPU and maintaining realtime performance.
To save the filtered video, implement an AVAssetWriter and append the sample buffers in the same AVCaptureVideoDataOutputDelegate method where the filtering is done.
Here is an example in Swift.
import UIKit
import GLKit
import AVFoundation

private let rotationTransform = CGAffineTransformMakeRotation(CGFloat(-M_PI * 0.5))

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    private var context: CIContext!
    private var targetRect: CGRect!
    private var session: AVCaptureSession!
    private var filter: CIFilter!

    @IBOutlet var glView: GLKView!

    override func prefersStatusBarHidden() -> Bool {
        return true
    }

    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)

        let whiteColor = CIColor(
            red: 1.0,
            green: 1.0,
            blue: 1.0
        )

        filter = CIFilter(
            name: "CIColorMonochrome",
            withInputParameters: [
                "inputColor" : whiteColor,
                "inputIntensity" : 1.0
            ]
        )

        // GL context
        let glContext = EAGLContext(
            API: .OpenGLES2
        )

        glView.context = glContext
        glView.enableSetNeedsDisplay = false

        context = CIContext(
            EAGLContext: glContext,
            options: [
                kCIContextOutputColorSpace: NSNull(),
                kCIContextWorkingColorSpace: NSNull(),
            ]
        )

        let screenSize = UIScreen.mainScreen().bounds.size
        let screenScale = UIScreen.mainScreen().scale

        targetRect = CGRect(
            x: 0,
            y: 0,
            width: screenSize.width * screenScale,
            height: screenSize.height * screenScale
        )

        // Setup capture session.
        let cameraDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
        let videoInput = try? AVCaptureDeviceInput(
            device: cameraDevice
        )

        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: dispatch_get_main_queue())

        session = AVCaptureSession()
        session.beginConfiguration()
        session.addInput(videoInput)
        session.addOutput(videoOutput)
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            return
        }

        let originalImage = CIImage(
            CVPixelBuffer: pixelBuffer,
            options: [
                kCIImageColorSpace: NSNull()
            ]
        )

        let rotatedImage = originalImage.imageByApplyingTransform(rotationTransform)

        filter.setValue(rotatedImage, forKey: kCIInputImageKey)

        guard let filteredImage = filter.outputImage else {
            return
        }

        context.drawImage(filteredImage, inRect: targetRect, fromRect: filteredImage.extent)
        glView.display()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didDropSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        let seconds = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        print("dropped sample buffer: \(seconds)")
    }
}