Apply visual effect to images pixel by pixel in Swift


Question

I have a university assignment to create visual effects and apply them to video frames captured through the device's camera. I can currently get the image and display it, but I can't change the pixel color values.

I transform the sample buffer into the imageRef variable, and if I convert that to a UIImage everything is fine.

But now I want to take that imageRef and change its color values pixel by pixel, in this example inverting the colors (I have to do more complicated stuff later, so I can't use CIFilters). When I execute the commented-out part it crashes due to bad access.

Any help is much appreciated.

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

  let captureSession = AVCaptureSession()
  var previewLayer : AVCaptureVideoPreviewLayer?

  var captureDevice : AVCaptureDevice?

  @IBOutlet weak var cameraView: UIImageView!

  override func viewDidLoad() {
    super.viewDidLoad()

    captureSession.sessionPreset = AVCaptureSessionPresetMedium

    let devices = AVCaptureDevice.devices()

    for device in devices {
      if device.hasMediaType(AVMediaTypeVideo) && device.position == AVCaptureDevicePosition.Back {
        if let device = device as? AVCaptureDevice {
          captureDevice = device
          beginSession()
          break
        }
      }
    }
  }

  func focusTo(value : Float) {
    if let device = captureDevice {
      if(device.lockForConfiguration(nil)) {
        device.setFocusModeLockedWithLensPosition(value) {
          (time) in
        }
        device.unlockForConfiguration()
      }
    }
  }

  override func touchesBegan(touches: NSSet!, withEvent event: UIEvent!) {
    var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
    focusTo(touchPercent)
  }

  override func touchesMoved(touches: NSSet!, withEvent event: UIEvent!) {
    var touchPercent = Float(touches.anyObject().locationInView(view).x / 320)
    focusTo(touchPercent)
  }

  func beginSession() {
    configureDevice()

    var error : NSError?
    captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))

    if error != nil {
      println("error: \(error?.localizedDescription)")
    }

    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)

    previewLayer?.frame = view.layer.frame
    //view.layer.addSublayer(previewLayer)

    let output = AVCaptureVideoDataOutput()
    let cameraQueue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL)
    output.setSampleBufferDelegate(self, queue: cameraQueue)
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA]

    captureSession.addOutput(output)
    captureSession.startRunning()
  }

  func configureDevice() {
    if let device = captureDevice {
      device.lockForConfiguration(nil)
      device.focusMode = .Locked
      device.unlockForConfiguration()
    }
  }

  // MARK: - AVCaptureVideoDataOutputSampleBufferDelegate

  func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    CVPixelBufferLockBaseAddress(imageBuffer, 0)

    let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
    let width = CVPixelBufferGetWidth(imageBuffer)
    let height = CVPixelBufferGetHeight(imageBuffer)
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little

    let context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo)
    let imageRef = CGBitmapContextCreateImage(context)

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0)

    let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData
    let pixels = data.bytes

    var newPixels = UnsafeMutablePointer<UInt8>()

    //for index in stride(from: 0, to: data.length, by: 4) {

      /*newPixels[index] = 255 - pixels[index]
      newPixels[index + 1] = 255 - pixels[index + 1]
      newPixels[index + 2] = 255 - pixels[index + 2]
      newPixels[index + 3] = 255 - pixels[index + 3]*/
    //}

    bitmapInfo = CGImageGetBitmapInfo(imageRef)
    let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)

    let newImageRef = CGImageCreate(width, height, CGImageGetBitsPerComponent(imageRef), CGImageGetBitsPerPixel(imageRef), bytesPerRow, colorSpace, bitmapInfo, provider, nil, false, kCGRenderingIntentDefault)

    let image = UIImage(CGImage: newImageRef, scale: 1, orientation: .Right)
    dispatch_async(dispatch_get_main_queue()) {
      self.cameraView.image = image
    }
  }
}

Thanks in advance.

Answer

You get bad access in the pixel-manipulation loop because newPixels is an UnsafeMutablePointer initialized with the built-in default raw pointer: it points to address 0x0000, i.e. unallocated memory that you have no right to store data in.
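To illustrate the difference, here is a minimal, self-contained sketch (written in current Swift syntax, not the Swift 1.x of the original post; the function name is hypothetical): writing through a pointer is only valid after memory has actually been allocated for it, and that memory must be deallocated afterwards.

```swift
import Foundation

// Writing through a pointer is only safe once storage has been allocated.
// A default-constructed pointer owns no storage and writing to it crashes.
func allocatedWriteDemo() -> UInt8 {
    let count = 16
    let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: count)
    buffer.initialize(repeating: 0, count: count)  // zero-fill the buffer
    defer {
        buffer.deinitialize(count: count)
        buffer.deallocate()  // release the storage when done
    }
    buffer[0] = 255  // safe: this byte belongs to our allocation
    return buffer[0]
}
```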

For a longer explanation and a "solution" I made some changes...

First, Swift has changed a bit since the OP was posted; this line had to be modified to use the rawValue initializer:

    //var bitmapInfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.PremultipliedFirst.toRaw())! | CGBitmapInfo.ByteOrder32Little
    var bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue) | CGBitmapInfo.ByteOrder32Little

A couple of changes were also required for the pointers, so I posted all the changes (I left the original lines in with comment marks).

    let data = CGDataProviderCopyData(CGImageGetDataProvider(imageRef)) as NSData

    //let pixels = data.bytes
    let pixels = UnsafePointer<UInt8>(data.bytes)

    let imageSize : Int = Int(width) * Int(height) * 4

    //var newPixels = UnsafeMutablePointer<UInt8>()

    var newPixelArray = [UInt8](count: imageSize, repeatedValue: 0)

    for index in stride(from: 0, to: data.length, by: 4) {
        newPixelArray[index] = 255 - pixels[index]
        newPixelArray[index + 1] = 255 - pixels[index + 1]
        newPixelArray[index + 2] = 255 - pixels[index + 2]
        newPixelArray[index + 3] = pixels[index + 3]
    }

    bitmapInfo = CGImageGetBitmapInfo(imageRef)
    //let provider = CGDataProviderCreateWithData(nil, newPixels, UInt(data.length), nil)
    let provider = CGDataProviderCreateWithData(nil, &newPixelArray, UInt(data.length), nil)

Some explanations: all the old pixel bytes must be read as UInt8, so instead of casting each one I changed pixels to be an UnsafePointer<UInt8>. Then I made an array for the new pixels, eliminated the newPixels pointer, and worked with the array directly. Finally I passed a pointer to the new array to the data provider to create the image, and removed the modification of the alpha byte.
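The per-pixel inversion itself is independent of Core Graphics and can be sketched as a small pure function over a BGRA byte array (hypothetical helper name; current Swift syntax, mirroring the loop above):

```swift
import Foundation

// Invert the three color channels of each BGRA pixel, leaving alpha as-is.
// (Hypothetical helper; mirrors the stride-by-4 loop in the answer.)
func invertedBGRA(_ pixels: [UInt8]) -> [UInt8] {
    var out = pixels
    for index in stride(from: 0, to: pixels.count, by: 4) {
        out[index]     = 255 - pixels[index]      // blue
        out[index + 1] = 255 - pixels[index + 1]  // green
        out[index + 2] = 255 - pixels[index + 2]  // red
        // out[index + 3] keeps its value: alpha untouched
    }
    return out
}
```

A single BGRA pixel [0, 128, 255, 200] becomes [255, 127, 0, 200]: every color byte is reflected around the midpoint while the alpha byte passes through unchanged.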

After this I was able to get negative images into my view, but with very low performance: around one image every ten seconds (iPhone 5, through Xcode). It also takes a long time to present the first frame in the image view.

I got a somewhat faster response when I added captureSession.stopRunning() at the beginning of the didOutputSampleBuffer function and, after the processing was completed, started the session again with captureSession.startRunning(). With this I had nearly 1 fps.
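A gentler alternative to stopping and restarting the whole session is to drop incoming frames while the previous one is still being processed. A minimal sketch of such a gate (the FrameGate type is hypothetical, not part of the original answer; AVCaptureVideoDataOutput's alwaysDiscardsLateVideoFrames serves a similar purpose):

```swift
import Foundation

// Sketch: skip incoming frames while one is still being processed,
// instead of stopping/restarting the capture session each time.
// (Hypothetical FrameGate type, not part of the original answer.)
final class FrameGate {
    private var busy = false
    private let lock = NSLock()

    // Returns true if the caller may process this frame.
    func tryEnter() -> Bool {
        lock.lock()
        defer { lock.unlock() }
        if busy { return false }  // a frame is already in flight: drop this one
        busy = true
        return true
    }

    // Call when processing of the current frame has finished.
    func leave() {
        lock.lock()
        busy = false
        lock.unlock()
    }
}
```

In didOutputSampleBuffer you would return early when tryEnter() is false, and call leave() after dispatching the finished image to the main queue.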

Thanks for the interesting challenge!
