Fastest way to draw a screen buffer on the iPhone


Question



I have a software renderer that I am porting from PC to the iPhone. What is the fastest way to manually update the screen with a buffer of pixels on the iPhone? On Windows, for instance, the fastest function I have found is SetDIBitsToDevice.

I don't know much about the iPhone or its libraries, and there seem to be so many layers and different types of UI elements, so I might need a lot of explanation...

For now I'm just constantly updating a texture in OpenGL and rendering it to the screen; I very much doubt that this is the best way to do it.

UPDATE:

I have tried the OpenGL screen-sized texture method:

I got 17fps...

I used a 512x512 texture (because texture dimensions need to be powers of two)
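A quick way to see where the 512 comes from is to round each screen dimension up to the next power of two. This helper is my own illustration, not code from the post:

```c
/* Round a texture dimension up to the next power of two, as required
 * by OpenGL ES 1.1 on pre-3GS iPhone hardware (no NPOT textures). */
static unsigned next_pow2(unsigned v)
{
    unsigned p = 1;
    while (p < v)
        p <<= 1;
    return p;
}
```

For a 320x480 screen, next_pow2(320) and next_pow2(480) are both 512, hence the 512x512 texture.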

Just the call

glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_RGBA, GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());

seemed to be responsible for pretty much ALL of the slowdown.

Commenting it out, while leaving in all my software-rendering GUI code and the rendering of the now non-updating texture, resulted in 60 fps, 30% renderer usage, and no notable spikes from the CPU.

Note that GetBuffer() simply returns a pointer to the software back buffer of the GUI system; there is no rejigging or resizing of the buffer in any way. It is properly sized and formatted for the texture, so I am fairly certain the slowdown has nothing to do with the software renderer, which is the good news. It looks like if I can find a way to update the screen at 60 fps, software rendering should work for the time being.

I tried doing the update-texture call with 512,320 rather than 512,512, and oddly this was even slower... running at 10 fps. It also reports render utilization of only about 5%, with all the time being wasted in a call to Untwiddle32bpp inside OpenGL ES.
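One hypothetical workaround for the slow partial-width path (my own sketch, not something tried in the post): copy the 320-pixel rows into a 512-pixel-wide staging buffer so the upload can always be a full-width call:

```c
#include <stdint.h>
#include <string.h>

enum { SRC_W = 320, SRC_H = 480, TEX_W = 512 };

/* Copy a 320-wide RGBA back buffer row by row into a 512-wide staging
 * buffer whose stride matches the texture, leaving the right-hand
 * padding untouched. The staging buffer can then be uploaded with a
 * single full-width glTexSubImage2D, which may sidestep the slow
 * partial-update path. */
static void stage_rows(const uint32_t *src, uint32_t *dst)
{
    for (int y = 0; y < SRC_H; y++)
        memcpy(dst + (size_t)y * TEX_W,
               src + (size_t)y * SRC_W,
               SRC_W * sizeof(uint32_t));
}
```

Whether the extra memcpy wins back more than it costs is exactly the kind of thing to profile on the device.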

I can change my software renderer to natively render to any pixel format, if that would result in a more direct blit.
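For what it's worth, switching between 32-bit channel orders (e.g. RGBA and the BGRA layout that kCGBitmapByteOrder32Little implies) is just a per-pixel byte swap; a hedged sketch of the kind of conversion that rendering natively in the target format would avoid:

```c
#include <stdint.h>
#include <stddef.h>

/* Swap the R and B bytes of each 32-bit pixel in place, converting
 * RGBA8888 to BGRA8888 (or back - the swap is its own inverse). */
static void swap_red_blue(uint8_t *px, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; i++) {
        uint8_t r = px[i * 4 + 0];
        px[i * 4 + 0] = px[i * 4 + 2];
        px[i * 4 + 2] = r;
    }
}
```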

FYI, tested on an iPod touch G2 running 2.2.1 (so, like an iPhone 3G on steroids).

UPDATE 2:

I have just finished writing the CoreAnimation/Graphics method. It looks good, but I am a little worried about how it updates the screen each frame, basically ditching the old CGImage and creating a brand-new one... check it out in someRandomFunction below: is this the quickest way to update the image? Any help would be greatly appreciated.

//
//  catestAppDelegate.m
//  catest
//
//  Created by User on 3/14/10.
//  Copyright __MyCompanyName__ 2010. All rights reserved.
//




#import "catestAppDelegate.h"
#import "catestViewController.h"
#import "QuartzCore/QuartzCore.h"

const void* GetBytePointer(void* info)
{
    // this is currently only called once
    return info; // info is a pointer to the buffer
}

void ReleaseBytePointer(void*info, const void* pointer)
{
    // don't care, just using the one static buffer at the moment
}


size_t GetBytesAtPosition(void* info, void* buffer, off_t position, size_t count)
{
    // I don't think this ever gets called
    memcpy(buffer, ((char*)info) + position, count);
    return count;
}

CGDataProviderDirectCallbacks providerCallbacks =
{ 0, GetBytePointer, ReleaseBytePointer, GetBytesAtPosition, 0 };


static CGImageRef cgIm;

static CGDataProviderRef dataProvider;
unsigned char* imageData;
const size_t imageDataSize = 320 * 480 * 4;
NSTimer *animationTimer;
NSTimeInterval animationInterval= 1.0f/60.0f;


@implementation catestAppDelegate

@synthesize window;
@synthesize viewController;


- (void)applicationDidFinishLaunching:(UIApplication *)application {    


    [window makeKeyAndVisible];


    const size_t byteRowSize = 320 * 4;
    imageData = malloc(imageDataSize);

    for(int i=0;i<imageDataSize/4;i++)
            ((unsigned int*)imageData)[i] = 0xFFFF00FF; // just set it to some random init color, currently yellow


    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    dataProvider =
    CGDataProviderCreateDirect(imageData, imageDataSize,
                               &providerCallbacks);  // currently global

    cgIm = CGImageCreate
    (320, 480,
     8, 32, 320*4, colorSpace,
     kCGImageAlphaNone | kCGBitmapByteOrder32Little,
     dataProvider, 0, false, kCGRenderingIntentDefault);  // also global, probably doesn't need to be
    CGColorSpaceRelease(colorSpace); // the image retains the color space; release our reference

    self.window.layer.contents = cgIm; // set the UIWindow's CALayer's contents to the image, yay works!

   // CGImageRelease(cgIm);  // we should do this at some stage...
   // CGDataProviderRelease(dataProvider);

    animationTimer = [NSTimer scheduledTimerWithTimeInterval:animationInterval target:self selector:@selector(someRandomFunction) userInfo:nil repeats:YES];
    // set up a timer in the attempt to update the image

}
float col = 0;

-(void)someRandomFunction
{
    // update the original buffer
    for(int i=0;i<imageDataSize;i++)
        imageData[i] = (unsigned char)(int)col;

    col+=256.0f/60.0f;

    // and currently the only way I know how to apply that buffer update to the screen is to
    // create a new image and bind it to the layer...???
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGImageRelease(cgIm); // drop our reference to last frame's image so we don't leak one per frame
    cgIm = CGImageCreate
    (320, 480,
     8, 32, 320*4, colorSpace,
     kCGImageAlphaNone | kCGBitmapByteOrder32Little,
     dataProvider, 0, false, kCGRenderingIntentDefault);

    CGColorSpaceRelease(colorSpace);

    self.window.layer.contents = cgIm;

    // and that currently works, updating the screen, but i don't know how well it runs...
}


- (void)dealloc {
    [viewController release];
    [window release];
    [super dealloc];
}


@end

Solution

The fastest App Store-approved way to do CPU-only 2D graphics is to create a CGImage backed by a buffer using CGDataProviderCreateDirect and assign that to a CALayer's contents property.
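The direct-provider approach amounts to handing CoreGraphics a table of callbacks over a buffer you own. A platform-free sketch of that contract (the struct and field names here only mimic CGDataProviderDirectCallbacks; they are not the real CoreGraphics types):

```c
#include <stddef.h>
#include <string.h>

/* Stand-in for CGDataProviderDirectCallbacks: the consumer either asks
 * for a pointer to the whole buffer (the fast path) or for an arbitrary
 * byte range (the fallback path). */
typedef struct {
    const void *(*get_byte_pointer)(void *info);
    void (*release_byte_pointer)(void *info, const void *pointer);
    size_t (*get_bytes_at_position)(void *info, void *buffer,
                                    long position, size_t count);
} direct_callbacks;

static const void *get_byte_pointer(void *info)
{
    return info; /* info is the pixel buffer itself */
}

static void release_byte_pointer(void *info, const void *pointer)
{
    (void)info; (void)pointer; /* static buffer: nothing to free */
}

static size_t get_bytes_at_position(void *info, void *buffer,
                                    long position, size_t count)
{
    memcpy(buffer, (const char *)info + position, count);
    return count;
}

static const direct_callbacks callbacks = {
    get_byte_pointer, release_byte_pointer, get_bytes_at_position
};
```

Because the provider hands out a pointer to the live buffer, no copy is made at creation time; the remaining cost, which the asker's someRandomFunction wrestles with, is getting the layer to re-read the pixels each frame.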

For best results use the kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little or kCGImageAlphaNone | kCGBitmapByteOrder32Little bitmap types, and double-buffer so that the display is never in an inconsistent state.
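The double buffering recommended here can be as simple as two pixel buffers and a pointer swap: the renderer always writes the back buffer while the displayed image wraps the front one. A minimal sketch (my own, using the post's 320x480, 32-bpp buffer):

```c
#include <stdint.h>

enum { BUF_PIXELS = 320 * 480 };

typedef struct {
    uint32_t a[BUF_PIXELS];
    uint32_t b[BUF_PIXELS];
    uint32_t *front; /* what the displayed CGImage currently wraps */
    uint32_t *back;  /* what the software renderer draws into */
} double_buffer;

static void db_init(double_buffer *db)
{
    db->front = db->a;
    db->back  = db->b;
}

/* Call once per frame after rendering completes: the finished back
 * buffer becomes the front buffer, so the display never shows a
 * half-drawn frame. */
static void db_swap(double_buffer *db)
{
    uint32_t *t = db->front;
    db->front = db->back;
    db->back  = t;
}
```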

Edit: this should be faster than drawing to an OpenGL texture in theory, but as always, profile to be sure.

Edit 2: CADisplayLink is a useful class no matter which compositing method you use.
