How to display an image on a MKOverlayView?


Question



UPDATE:

Images that are projected onto the MKMapView using a MKOverlayView use the Mercator projection, while the image that I use as input data uses a WGS84 projection. Is there a way to convert the input image to the right projection (WGS84 -> Mercator) without tiling the image up, and can it be done on the fly?

Normally you could convert an image to the right projection using the program gdal2tiles. The input data, however, changes every fifteen minutes, so the image has to be converted every fifteen minutes; the conversion has to be done on the fly. I also want the tiling to be done by MapKit and not by myself using gdal2tiles or the GDAL framework.

UPDATE END

I'm currently working on a project which displays a rainfall radar over some part of the world. The radar image is provided by EUMETSAT; they offer a KML file which can be loaded into Google Earth or Google Maps. If I load the KML file in Google Maps, it displays perfectly, but if I draw the image using a MKOverlayView on a MKMapView, the image is slightly off.

For example: on the left side, Google Maps; on the right side, the same image displayed on a MKMapView.

The surface that the image covers can be viewed on Google Maps, the satellite that is used for the image is the "Meteosat 0 Degree" satellite.

The surface that both images cover is of the same size, this is the LatLonBox from the KML file, it specifies where the top, bottom, right, and left sides of a bounding box for the ground overlay are aligned.

  <LatLonBox id="GE_MET0D_VP-MPE-latlonbox">
        <north>57.4922</north>
        <south>-57.4922</south>
        <east>57.4922</east>
        <west>-57.4922</west>
        <rotation>0</rotation>
  </LatLonBox>

I create a new custom MKOverlay object called RadarOverlay with these parameters:

[[RadarOverlay alloc] initWithImageData:[[self.currentRadarData objectAtIndex:0] valueForKey:@"Image"] withLowerLeftCoordinate:CLLocationCoordinate2DMake(-57.4922, -57.4922) withUpperRightCoordinate:CLLocationCoordinate2DMake(57.4922, 57.4922)];

The implementation of the custom MKOverlay object, RadarOverlay:

- (id) initWithImageData:(NSData*) imageData withLowerLeftCoordinate:(CLLocationCoordinate2D)lowerLeftCoordinate withUpperRightCoordinate:(CLLocationCoordinate2D)upperRightCoordinate
{
     // Call through to super before touching self
     if ((self = [super init]))
     {
          self.radarData = imageData;

          MKMapPoint lowerLeft = MKMapPointForCoordinate(lowerLeftCoordinate);
          MKMapPoint upperRight = MKMapPointForCoordinate(upperRightCoordinate);

          mapRect = MKMapRectMake(lowerLeft.x, upperRight.y, upperRight.x - lowerLeft.x, lowerLeft.y - upperRight.y);
     }
     return self;
}

- (CLLocationCoordinate2D)coordinate
{
     return MKCoordinateForMapPoint(MKMapPointMake(MKMapRectGetMidX(mapRect), MKMapRectGetMidY(mapRect)));
}

- (MKMapRect)boundingMapRect
{
     return mapRect;
}

The implementation of the custom MKOverlayView, RadarOverlayView

- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    RadarOverlay* radarOverlay = (RadarOverlay*) self.overlay;

    UIImage *image          = [[UIImage alloc] initWithData:radarOverlay.radarData];

    CGImageRef imageReference = image.CGImage;

    MKMapRect theMapRect    = [self.overlay boundingMapRect];
    CGRect theRect          = [self rectForMapRect:theMapRect];
    CGRect clipRect         = [self rectForMapRect:mapRect];

    NSUserDefaults *preferences = [NSUserDefaults standardUserDefaults];
    CGContextSetAlpha(context, [preferences floatForKey:@"RadarTransparency"]);

    CGContextAddRect(context, clipRect);
    CGContextClip(context);

    CGContextDrawImage(context, theRect, imageReference);

    [image release]; 
}

When I download the image, I flip the image so it can be easily drawn in the MKOverlayView

size_t width    = (CGImageGetWidth(imageReference) / self.scaleFactor);
size_t height   = (CGImageGetHeight(imageReference) / self.scaleFactor);

// Calculate colorspace for the specified image
CGColorSpaceRef imageColorSpace = CGImageGetColorSpace(imageReference);

// Allocate and clear memory for the data of the image
unsigned char *imageData = (unsigned char*) malloc(height * width * 4);
memset(imageData, 0, height * width * 4);

// Define the rect for the image
CGRect imageRect;
if(image.imageOrientation==UIImageOrientationUp || image.imageOrientation==UIImageOrientationDown) 
    imageRect = CGRectMake(0, 0, width, height); 
else 
    imageRect = CGRectMake(0, 0, height, width); 

// Create the imagecontext by defining the colorspace and the address of the location to store the data
CGContextRef imageContext = CGBitmapContextCreate(imageData, width, height, 8, width * 4, imageColorSpace, kCGImageAlphaPremultipliedLast);

CGContextSaveGState(imageContext);

// Flip the image vertically so it can be drawn more easily with CGContextDrawImage
CGContextTranslateCTM(imageContext, 0, height);
CGContextScaleCTM(imageContext, 1.0, -1.0);

if(image.imageOrientation==UIImageOrientationLeft) 
{
    CGContextRotateCTM(imageContext, M_PI / 2);
    CGContextTranslateCTM(imageContext, 0, -width);
}
else if(image.imageOrientation==UIImageOrientationRight) 
{
    CGContextRotateCTM(imageContext, - M_PI / 2);
    CGContextTranslateCTM(imageContext, -height, 0);
} 
else if(image.imageOrientation==UIImageOrientationDown) 
{
    CGContextTranslateCTM(imageContext, width, height);
    CGContextRotateCTM(imageContext, M_PI);
}

// Draw the image in the context
CGContextDrawImage(imageContext, imageRect, imageReference);
CGContextRestoreGState(imageContext);

After flipping the image, I manipulate it and then store it in memory as an NSData object.

It looks like the image gets stretched, but it looks all right at the center of the image, which is at the equator.

Solution

Have you already seen "Session 127 - Customizing Maps with Overlays" from the WWDC 2010 videos? One of the examples takes earthquake data, which gives the earthquake risk for 0.5-by-0.5-degree areas, and maps it. Your radar data looks similar, being based on squares. The sample code has a full application called HazardMaps, which takes this data and creates an overlay using MKMapPoints. If you haven't already seen this video, I think it will give you plenty of useful information. He also talks about converting to the Mercator projection.

Another thing to check is what coordinate system (datum) the data from EUMETSAT is in. Google Maps uses a system called WGS-84, which is a general standard. But there are many other standards that can give more accurate positions in different parts of the world. If you use latitude and longitude from a different standard in Google Maps, all your points will be off by a certain amount. The offset is not consistent; it changes as you move around the map. It's possible that Google Maps is being smart about the data and converting to WGS-84 on the fly.

You might find out more details by looking at the KML. I looked but couldn't find the final KML with the rectangles. Perhaps its metadata says which coordinate system it uses.
