iOS: retrieve the differing pixels in a pixel-by-pixel comparison of UIImages
Question
I am trying to do a pixel-by-pixel comparison of two UIImages, and I need to retrieve the pixels that are different. Using this answer, Generate hash from UIImage, I found a way to generate a hash for a UIImage. Is there a way to compare the two hashes and retrieve the differing pixels?
If you want to actually retrieve the differences, a hash cannot help you. A hash can tell you that differences likely exist, but to find the actual differences you have to use other techniques.
For example, to create a UIImage that contains the difference between two images, see this accepted answer, in which Cory Kilgor illustrates the use of CGContextSetBlendMode with the kCGBlendModeDifference blend mode:
+ (UIImage *)differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
    CGImageRef topRef = [top CGImage];
    CGImageRef bottomRef = [bottom CGImage];

    // Dimensions
    CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
    CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
    CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));

    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        printf("Error allocating color space.\n");
        return NULL;
    }

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 renderFrame.size.width,
                                                 renderFrame.size.height,
                                                 8,
                                                 renderFrame.size.width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        printf("Context not created!\n");
        return NULL;
    }

    // Draw the bottom image normally, then the top image with the
    // "difference" blend mode, so each pixel holds |top - bottom|.
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);

    // Create image from context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(context);

    return image;
}