How to get the RGB values for a pixel on an image on the iPhone
Question
I am writing an iPhone application and need to essentially implement something equivalent to the 'eyedropper' tool in Photoshop, where you can touch a point on the image and capture the RGB values for the pixel in question to determine and match its color. Getting the UIImage is the easy part, but is there a way to convert the UIImage data into a bitmap representation from which I could extract this information for a given pixel? A working code sample would be most appreciated, and note that I am not concerned with the alpha value.
A little more detail...
I posted earlier this evening with a consolidation and small addition to what had been said on this page - that can be found at the bottom of this post. I am editing the post at this point, however, to post what I propose is (at least for my requirements, which include modifying pixel data) a better method, as it provides writable data (whereas, as I understand it, the method provided by previous posts and at the bottom of this post provides a read-only reference to data).
Method 1: Writable Pixel Information
I defined constants
#define RGBA 4
#define RGBA_8_BIT 8
In my UIImage subclass I declared instance variables:
size_t bytesPerRow;
size_t byteCount;
size_t pixelCount;
CGContextRef context;
CGColorSpaceRef colorSpace;
// A pointer to an array of RGBA bytes in memory
UInt8 *pixelByteData;
RPVW_RGBAPixel *pixelData;
The pixel struct (with alpha in this version)
typedef struct RGBAPixel {
    byte red;
    byte green;
    byte blue;
    byte alpha;
} RGBAPixel;
Bitmap function (returns premultiplied RGBA; divide each RGB component by A to recover the un-premultiplied RGB):
- (RGBAPixel *)bitmap {
    NSLog(@"Returning bitmap representation of UIImage.");
    // 8 bits each of red, green, blue, and alpha.
    [self setBytesPerRow:(size_t)self.size.width * RGBA];
    [self setByteCount:bytesPerRow * (size_t)self.size.height];
    [self setPixelCount:(size_t)self.size.width * (size_t)self.size.height];
    // Create RGB color space
    [self setColorSpace:CGColorSpaceCreateDeviceRGB()];
    if (!colorSpace) {
        NSLog(@"Error allocating color space.");
        return nil;
    }
    [self setPixelData:malloc(byteCount)];
    if (!pixelData) {
        NSLog(@"Error allocating bitmap memory. Releasing color space.");
        CGColorSpaceRelease(colorSpace);
        return nil;
    }
    // Create the bitmap context.
    // Pre-multiplied RGBA, 8 bits per component.
    // The source image format will be converted to the format specified here by CGBitmapContextCreate.
    [self setContext:CGBitmapContextCreate((void *)pixelData,
                                           self.size.width,
                                           self.size.height,
                                           RGBA_8_BIT,
                                           bytesPerRow,
                                           colorSpace,
                                           kCGImageAlphaPremultipliedLast)];
    // Make sure we have our context
    if (!context) {
        NSLog(@"Context not created!");
        free(pixelData);
        CGColorSpaceRelease(colorSpace);
        return nil;
    }
    // Draw the image into the bitmap context.
    // The memory allocated for the context will then contain the raw image pixelData in the specified color space.
    CGRect rect = {{0, 0}, {self.size.width, self.size.height}};
    CGContextDrawImage(context, rect, self.CGImage);
    // Now we can get a pointer to the image pixelData associated with the bitmap context.
    pixelData = (RGBAPixel *)CGBitmapContextGetData(context);
    return pixelData;
}
Method 2: Read-Only Data (previous information)
Step 1. I declared a type for byte:
typedef unsigned char byte;
Step 2. I declared a struct to correspond to a pixel:
typedef struct RGBPixel {
byte red;
byte green;
byte blue;
} RGBPixel;
Step 3. I subclassed UIImageView and declared (with corresponding synthesized properties):
// Reference to Quartz CGImage for receiver (self)
CFDataRef bitmapData;
// Buffer holding raw pixel data copied from Quartz CGImage held in receiver (self)
UInt8* pixelByteData;
// A pointer to the first pixel element in an array
RGBPixel* pixelData;
Step 4. Subclass code I put in a method named bitmap (to return the bitmap pixel data):
// Get the bitmap data from the receiver's CGImage (see UIImage docs)
[self setBitmapData: CGDataProviderCopyData( CGImageGetDataProvider( [self CGImage] ) )];
// Create a buffer to store bitmap data (uninitialized memory as long as the data)
[self setPixelByteData: malloc( CFDataGetLength( bitmapData ) )];
// Copy image data into allocated buffer
CFDataGetBytes( bitmapData, CFRangeMake( 0, CFDataGetLength( bitmapData ) ), pixelByteData );
// Cast a pointer to the first element of pixelByteData
// Essentially what we're doing is making a second pointer that divides the byteData's units differently - instead of treating each unit as 1 byte, we treat each unit as 3 bytes (1 pixel).
// Note: this assumes the source CGImage really is 24-bit RGB with no alpha; many images on the iPhone are 32-bit RGBA, in which case a 4-byte pixel struct (or Method 1 above) is needed.
pixelData = (RGBPixel*) pixelByteData;
// Now you can access pixels by index: pixelData[ index ]
NSLog( @"Pixel data one red (%i), green (%i), blue (%i).", pixelData[0].red, pixelData[0].green, pixelData[0].blue );
// You can determine the desired index as row * imageWidth + column.
return pixelData;
Step 5. I made an accessor method:
- (RGBPixel*) pixelDataForRow: (int) row
                       column: (int) column {
    // Return a pointer to the pixel data (row-major layout: row * width + column)
    return &pixelData[ (row * (int)self.image.size.width) + column ];
}