NSImage from a 1D pixel array?


Problem Description


I have a large 1D dynamic array in my program that represents a FITS image on disk i.e. it holds all the pixel values of the image. The type of the array is double. At the moment, I am only concerned with monochrome images.

Since Cocoa does not support the FITS format directly, I am reading in the images using the CFITSIO library. This works - I can manipulate the array as I wish and save the result to disk using the library.
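
For context, the read side looks roughly like the CFITSIO sketch below (simplified, not my exact code; the real version checks status after every call and frees the buffer when done):

    /* requires fitsio.h from CFITSIO */
    fitsfile *fptr = NULL;
    int status = 0, bitpix = 0, naxis = 0;
    long naxes[2] = {1, 1};

    fits_open_file(&fptr, "image.fits", READONLY, &status);
    fits_get_img_param(fptr, 2, &bitpix, &naxis, naxes, &status);

    long nPixels = naxes[0] * naxes[1];
    double *imageArray = malloc(nPixels * sizeof(double));

    long firstPixel[2] = {1, 1};                    /* FITS pixel indices are 1-based */
    fits_read_pix(fptr, TDOUBLE, firstPixel, nPixels,
                  NULL, imageArray, NULL, &status); /* read everything as double */
    fits_close_file(fptr, &status);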

However, I now want to display the image. I presume this is something NSImage or NSView can do. But the class references don't seem to list a method which could take a C array and ultimately return an NSImage object. The closest I found was -initWithData:(NSData*). But I'm not 100% sure if this is what I need.

Am I barking up the wrong tree here? Any pointers to a class or method which could handle this would be much appreciated.

EDIT:

Here's the updated code. Note that I'm setting every pixel to 0xFFFF. This only results in a grey image. This is of course just a test. When loading the actual FITS file, I replace 0xFFFF with imageArray[i * width + j]. This works perfectly in 8 bits (of course, I divide every pixel value by 256 to represent it in 8 bits).

NSBitmapImageRep *greyRep =
    [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:nil
                      pixelsWide:width
                      pixelsHigh:height
                   bitsPerSample:16
                 samplesPerPixel:1
                        hasAlpha:NO
                        isPlanar:NO
                  colorSpaceName:NSCalibratedWhiteColorSpace
                     bytesPerRow:0
                    bitsPerPixel:16];

NSInteger rowBytes = [greyRep bytesPerRow];
unsigned short *pix = (unsigned short *)[greyRep bitmapData];
NSLog(@"Row Bytes: %ld", (long)rowBytes);

if(temp.bitPix == 16) // 16 bit image
{
    for(i=0;i<height;i++)
    {
        for(j=0;j<width;j++)
        {
            pix[i * rowBytes + j] = 0xFFFF;
        }
    }
}

I also tried using Quartz2D directly. That does produce a proper image, even in 16 bits. But bizarrely, the data array takes 0xFF as white and not 0xFFFF. So I still have to divide everything by 0xFF - losing data in the process. Quartz2D code:

    short* grey = (short*)malloc(width*height*sizeof(short));
    for(int i=0;i<width*height; i++)
    {
        grey[i] = imageArray[i];
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef bitmapContext = CGBitmapContextCreate(grey, width, height, 16, width*2, colorSpace, kCGImageAlphaNone);
    CFRelease(colorSpace);

    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);

    NSImage *greyImage = [[NSImage alloc] initWithCGImage:cgImage size:NSMakeSize(width, height)];

Any suggestions?

Solution

initWithData only works for image types that the system already knows about. For unknown types -- and raw pixel data -- you need to construct the image representation yourself. You can do this via Core Graphics as suggested in the answer that Kirby links to. Alternatively, you can use NSImage by creating and adding an NSBitmapImageRep.

The exact details will depend on the format of your pixel data, but here's an example of the process for a greyscale image where the source data (the samples array) is represented as double in the range [0,1]:

/* generate a greyscale image representation */
NSBitmapImageRep *greyRep =
    [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes: nil  // allocate the pixel buffer for us
                      pixelsWide: xDim 
                      pixelsHigh: yDim
                   bitsPerSample: 8
                 samplesPerPixel: 1  
                        hasAlpha: NO
                        isPlanar: NO 
                  colorSpaceName: NSCalibratedWhiteColorSpace // 0 = black, 1 = white in this color space
                     bytesPerRow: 0     // passing 0 means "you figure it out"
                    bitsPerPixel: 8];   // this must agree with bitsPerSample and samplesPerPixel

NSInteger rowBytes = [greyRep bytesPerRow];

unsigned char* pix = [greyRep bitmapData];
for ( NSInteger i = 0; i < yDim; ++i )
{
    for ( NSInteger j = 0; j < xDim; ++j )
    {
        pix[i * rowBytes + j] = (unsigned char)(255 * (samples[i * xDim + j]));
    }
}

NSImage* greyscale = [[NSImage alloc] initWithSize:NSMakeSize(xDim,yDim)];
[greyscale addRepresentation:greyRep];
[greyRep release];
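
If you then want the image on screen, which was the original goal, a minimal usage sketch follows; the NSImageView and the window variable here are illustrative assumptions, not part of the code above:

NSImageView *imageView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, xDim, yDim)];
[imageView setImage:greyscale];
[[window contentView] addSubview:imageView];  // 'window' stands in for whatever NSWindow you have
[imageView release];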


EDIT (in response to comment)

I didn't know for sure whether 16 bit samples were supported, but you seem to have confirmed that they are.

What you're seeing stems from still treating the pixels as unsigned char, which is 8 bits. So you're only setting half of each row, and you're setting each of those pixels, one byte at a time, to the two byte value 0xFF00 -- not quite true white, but very close. The other half of the image is not touched, but would have been initialised to 0, so it stays black.

You need instead to work in 16 bit, by first casting the value you get back from the rep:

unsigned short * pix = (unsigned short*) [greyRep bitmapData];

And then assigning 16 bit values to the pixels:

if ( j % 2 )
{
    pix[i * rowBytes + j] = 0xFFFF;
}
else
{
    pix[i * rowBytes + j] = 0;
}

Scratch that, rowBytes is in bytes so we need to stick with unsigned char for pix and cast when assigning, which is a bit uglier:

if ( j % 2 )
{
    *((unsigned short*) (pix + i * rowBytes + j * 2)) = 0xFFFF;
}
else
{
    *((unsigned short*) (pix + i * rowBytes + j * 2)) = 0;
}

(I've switched the order of clauses because the == 0 seemed redundant. Actually for something like this it would be much neater to use ?: syntax, but enough of this C futzing.)
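
Putting the pieces together for the actual FITS data, here is a sketch of the full 16-bit fill using the greyRep from the updated code in the question. The assumption that imageArray has already been scaled to the range [0,1] is mine; raw FITS values would need to be normalised first:

NSInteger rowBytes = [greyRep bytesPerRow];          // stride in bytes, not pixels
unsigned char *base = [greyRep bitmapData];

for (NSInteger i = 0; i < height; i++)
{
    // cast each row's start address so the offset arithmetic stays in bytes
    unsigned short *rowPix = (unsigned short *)(base + i * rowBytes);
    for (NSInteger j = 0; j < width; j++)
    {
        // 0.0 -> 0x0000 (black), 1.0 -> 0xFFFF (white)
        rowPix[j] = (unsigned short)(65535.0 * imageArray[i * width + j]);
    }
}

NSImage *fitsImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[fitsImage addRepresentation:greyRep];
[greyRep release];

Scaling by 65535 is the 16-bit analogue of the 255 multiplier in the 8-bit example above.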
