libgdx TiledMap Rendering Performance Issue

Question

So I had performance issues with my libgdx project and I tracked them down to the map rendering. I isolated the issue by creating an empty project that does as little as possible: just the map rendering. This is the code I came up with:

The desktop starter project class:

package com.me.test;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class Main {
    public static void main(String[] args) {
        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "performanceTest";
        cfg.useGL20 = false; // doesn't make a difference...
        cfg.width = 1080;
        cfg.height = cfg.width / 12 * 9; // 810

        new LwjglApplication(new Test(), cfg);
    }
}

The actual code:

package com.me.test;

import com.badlogic.gdx.ApplicationListener;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL10;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.maps.tiled.TiledMap;
import com.badlogic.gdx.maps.tiled.TmxMapLoader;
import com.badlogic.gdx.maps.tiled.renderers.OrthogonalTiledMapRenderer;

public class Test implements ApplicationListener {
    private OrthographicCamera camera;
    private TiledMap map;
    private static OrthogonalTiledMapRenderer renderer;

    @Override
    public void create() {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera(w, h);

        TmxMapLoader maploader = new TmxMapLoader();

        map = maploader.load("test.tmx");
        renderer = new OrthogonalTiledMapRenderer(map, 1);
        renderer.setView(camera);
    }

    @Override
    public void dispose() {
        renderer.dispose();
        map.dispose();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        renderer.render();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }
}
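An aside on the API usage: setView(camera) is called only once in create(), which works here because the camera never moves. In a game with a scrolling camera, the view would typically be refreshed every frame; a minimal sketch of that variant, not part of the original test:

@Override
public void render() {
    Gdx.gl.glClearColor(1, 1, 1, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    camera.update();          // recompute the camera's combined matrix
    renderer.setView(camera); // hand the updated view to the map renderer
    renderer.render();
}

This is unrelated to the performance behavior described below; it is mentioned only so the single setView call is not mistaken for the cause.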

Now the weird thing is: not only is the CPU usage ridiculously high, but it jumps when more than 128 tiles have to be rendered. Below 129 tiles the performance is always the same (the process takes only about 2-4%), but rendering 129 tiles or more takes about 40-60%! Now my question is: why is that? Am I doing something wrong? I can't imagine the renderer from libgdx would have such a fatal flaw... and making a game using only 128 tiles on screen isn't an option :)

Thanks for any answers or thoughts!

Environment:

  • Eclipse Kepler Service Release 1
  • libgdx version 0.9.9
  • Windows 7
  • Graphics chip: NVIDIA GeForce GTX 560 Ti
  • CPU: Pentium Dual-Core 2.70GHz

Rendering 128 tiles: [CPU usage screenshot]

Rendering 129 tiles: [CPU usage screenshot]

Recommended answer

I found the solution:

Simply add

Mesh.forceVBO=true;

before the application starts.
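For concreteness, a minimal sketch of where this could go in the desktop starter from the question. Mesh.forceVBO is a static field on com.badlogic.gdx.graphics.Mesh; setting it before the LwjglApplication is created, as the answer advises, makes libgdx render meshes through vertex buffer objects instead of vertex arrays:

package com.me.test;

import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;
import com.badlogic.gdx.graphics.Mesh;

public class Main {
    public static void main(String[] args) {
        // Force VBOs instead of vertex arrays; must be set before
        // the application (and its GL context) is created.
        Mesh.forceVBO = true;

        LwjglApplicationConfiguration cfg = new LwjglApplicationConfiguration();
        cfg.title = "performanceTest";
        cfg.width = 1080;
        cfg.height = cfg.width / 12 * 9; // 810

        new LwjglApplication(new Test(), cfg);
    }
}

Vertex arrays appear to be the slow path on the affected hardware, which would explain why forcing VBOs removes the jump in CPU usage.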

First of all, this is probably a hardware issue. On other computers everything runs smoothly, but there have been others who had the same issue. On the Badlogic Games forum I found the answer; more details are there:

Badlogic forum post
