CIImage extent in pixels or points?


Question


I'm working with a CIImage, and while I understand it's not a linear image, it does hold some data.

My question is whether or not a CIImage's extent property returns pixels or points? According to the documentation, which says very little, it's working space coordinates. Does this mean there's no way to get the pixels / points from a CIImage and I must convert to a UIImage to use the .size property to get the points?

I have a UIImage with a certain size, and when I create a CIImage using the UIImage, the extent is shown in points. But if I run a CIImage through a CIFilter that scales it, I sometimes get the extent returned in pixel values.

Solution

I'll answer the best I can.

If your source is a UIImage, its size will be the same as the extent. But please note, this isn't a UIImageView (whose size is in points). And we're just talking about the source image.
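
As a minimal sketch of that (the image name is hypothetical, and this assumes a scale-1 UIImage):

import UIKit
import CoreImage

if let uiImage = UIImage(named: "photo"),      // hypothetical image, say 3200 x 2000 at scale 1
   let ciImage = CIImage(image: uiImage) {
    print(uiImage.size)      // (3200.0, 2000.0)
    print(ciImage.extent)    // (0.0, 0.0, 3200.0, 2000.0) -- same width and height as the UIImage
}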

Running something through a CIFilter means you are manipulating things. If all you are doing is manipulating color, its size/extent shouldn't change (the same as creating your own CIColorKernel - it works pixel-by-pixel).

But, depending on the CIFilter, you may well be changing the size/extent. Certain filters create a mask or tile. These may actually have an extent that is infinite! Others (blurs are a great example) sample surrounding pixels, so their extent actually increases because they sample "pixels" beyond the source image's size. (The custom-kernel equivalent is a CIWarpKernel.)
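
Here's a hedged sketch of those last two points, again with a hypothetical image: a color-only filter leaves the extent alone, a blur grows it (by how much depends on the radius), and a generator's extent can be infinite.

import UIKit
import CoreImage

if let uiImage = UIImage(named: "photo"),
   let ciImage = CIImage(image: uiImage) {

    // Color-only work is pixel-by-pixel, so the extent is unchanged.
    let mono = ciImage.applyingFilter("CIPhotoEffectMono")
    print(mono.extent == ciImage.extent)              // true

    // A blur samples pixels beyond the source rect, so the extent grows.
    let blurred = ciImage.applyingFilter("CIGaussianBlur",
                                         parameters: [kCIInputRadiusKey: 10])
    print(blurred.extent.contains(ciImage.extent))    // true -- the blurred extent is strictly larger

    // A generator takes no input image; its extent is infinite.
    let flatRed = CIImage(color: .red)
    print(flatRed.extent.isInfinite)                  // true
}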

So yes, the extent can change quite a bit. Taking this to a bottom line:

  • What is the filter doing? Does it need to simply check a pixel's RGB and do something? Then the UIImage size should be the output CIImage extent.
  • Does the filter produce something that depends on the pixel's surrounding pixels? Then the output CIImage extent is slightly larger. How much may depend on the filter.
  • There are filters that produce something with no regard to an input. Most of these may have no true extent, as they can be infinite.

Points are what UIKit and CoreGraphics always work with. Pixels? At some point CoreImage does, but it's low-level enough that (unless you want to write your own kernel) you shouldn't care. Extents can usually - but keep in mind the above - be equated to a UIImage size.

EDIT

Many images (particularly RAW ones) can be so large that they affect performance. I have a UIImage extension that resizes an image to fit within a bounding square, which helps maintain consistent Core Image performance.

import UIKit

extension UIImage {
    /// Scales the image so its longer side equals `boundingSquareSideLength`,
    /// preserving the aspect ratio.
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        // Scale against whichever dimension is longer so the result fits the bounding square.
        let imgScale = self.size.width > self.size.height
            ? boundingSquareSideLength / self.size.width
            : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        // Redraw the image into a bitmap context of the new size.
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}

Usage:

image = image.resizeToBoundingSquare(640)

In this example, an image size of 3200x2000 would be reduced to 640x400, and an image size of 320x200 would be enlarged to 640x400. I do this to an image before rendering it and before creating a CIImage to use in a CIFilter.
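
A sketch of that workflow, with a hypothetical image name:

import UIKit
import CoreImage

if let source = UIImage(named: "rawCapture") {          // hypothetical large image, e.g. 3200 x 2000
    let resized = source.resizeToBoundingSquare(640)    // now 640 x 400
    if let ciImage = CIImage(image: resized) {          // extent is (0.0, 0.0, 640.0, 400.0)
        let output = ciImage.applyingFilter("CIPhotoEffectMono")  // any filter; the work stays at a predictable size
        print(output.extent)
    }
}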
