How do I get models to animate using Assimp?


Problem description


Currently I'm trying to make a game engine in C++ with OpenGL and want to get 3D animations to work. I have been advised to use Assimp and was able to find a tutorial to get static models to work, but I have no idea where to even start with animations. I have been trying to Google it, but haven't been able to find anything that works. How can I modify my code to get animations? What file format is recommended for it?

This is the code I have currently:

//Mesh.h    
#include <string>

#include "glut/include/GL/glew.h"   // GLEW must be included before GLUT
#include "glut/include/GL/glut.h"

#include <assimp/Importer.hpp>      // C++ importer interface
#include <assimp/scene.h>           // Output data structure
#include <assimp/postprocess.h>     // Post-processing flags

//textures
#include <SOIL.h>

class Mesh
{
public:
    Mesh(void);
    Mesh(std::string filename, std::string textureFilename, float x, float y, float z, float width, float height, float depth, float rotX, float rotY, float rotZ);
    ~Mesh(void);

    void Init(std::string filename);
    void LoadTexture(std::string textureName);
    void Draw();

private:
    GLfloat *vertexArray;
    GLfloat *normalArray;
    GLfloat *uvArray;

    GLint numVerts;

    GLuint m_Texture[1];

    float m_CenterX, m_CenterY, m_CenterZ, m_Width, m_Height, m_Depth;
    float m_XRotation, m_YRotation, m_ZRotation;
};

//Mesh.cpp
#include "Mesh.h"

Mesh::Mesh(void)
{
}

Mesh::Mesh(std::string filename, std::string textureFilename, float x, float y, float z, float width, float height, float depth, float rotX, float rotY, float rotZ)
{
    //fills in variables
    Init(filename);
    LoadTexture(textureFilename);
}

Mesh::~Mesh(void)
{

}

void Mesh::Init(std::string filename)
{
    Assimp::Importer importer;
    const aiScene *scene = importer.ReadFile(filename,aiProcessPreset_TargetRealtime_Fast);//aiProcessPreset_TargetRealtime_Fast has the configs you'll need

    if (!scene || !scene->HasMeshes())
        return; // load failed; importer.GetErrorString() has the details

    aiMesh *mesh = scene->mMeshes[0]; //assuming you only want the first mesh

    numVerts = mesh->mNumFaces*3;

    vertexArray = new float[mesh->mNumFaces*3*3];
    normalArray = new float[mesh->mNumFaces*3*3];
    uvArray = new float[mesh->mNumFaces*3*2];

    // Write through local cursors instead of mutating the member pointers,
    // so the members keep pointing at the start of each array.
    GLfloat *vertexCursor = vertexArray;
    GLfloat *normalCursor = normalArray;
    GLfloat *uvCursor = uvArray;

    for(unsigned int i=0;i<mesh->mNumFaces;i++)
    {
        const aiFace& face = mesh->mFaces[i];

        for(int j=0;j<3;j++)
        {
            aiVector3D uv = mesh->mTextureCoords[0][face.mIndices[j]];
            memcpy(uvCursor,&uv,sizeof(float)*2);
            uvCursor+=2;

            aiVector3D normal = mesh->mNormals[face.mIndices[j]];
            memcpy(normalCursor,&normal,sizeof(float)*3);
            normalCursor+=3;

            aiVector3D pos = mesh->mVertices[face.mIndices[j]];
            memcpy(vertexCursor,&pos,sizeof(float)*3);
            vertexCursor+=3;
        }
    }
}

void Mesh::LoadTexture(std::string textureName)         
{
    // SOIL creates and returns the texture ID itself, so load first and set
    // the parameters on that texture afterwards. (The original code generated
    // an ID with glGenTextures, set parameters on it, then let SOIL overwrite
    // the ID — leaking the first texture and leaving the parameters on the
    // wrong object.)
    m_Texture[0] = SOIL_load_OGL_texture // load an image file directly as a new OpenGL texture
    (
        textureName.c_str(),
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID,
        SOIL_FLAG_MIPMAPS | SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_COMPRESS_TO_DXT
    );

    glBindTexture(GL_TEXTURE_2D, m_Texture[0]);
    // Set our texture parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    // Set texture filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // NOTE the GL_NEAREST Here! 
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // NOTE the GL_NEAREST Here! 
}

void Mesh::Draw()
{
    glPushMatrix();
        glTranslatef(m_CenterX, m_CenterY, m_CenterZ);

        glRotatef(m_XRotation, 1, 0, 0);
        glRotatef(m_YRotation, 0, 1, 0);
        glRotatef(m_ZRotation, 0, 0, 1);

        glScalef(m_Width, m_Height, m_Depth);

        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);

            glNormalPointer(GL_FLOAT,0,normalArray);
            glTexCoordPointer(2,GL_FLOAT,0,uvArray);
            glVertexPointer(3,GL_FLOAT,0,vertexArray);

            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, m_Texture[0]);
            glDrawArrays(GL_TRIANGLES,0,numVerts);

        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    glPopMatrix();
}

Solution

This is an old question but I'm sure it could be of future use to others so I'll try to outline some options you have for doing animation with the Assimp library.

First off, I'd just like to mention that you can do animation without the Assimp library. The library just provides you with a nice way to load your models, but as you've discovered it won't do the animation for you.

Conceptually, animation is going to be fairly similar whether or not you use Assimp. For example, if you have written your own model loader you could easily use it instead of Assimp and still do animation. However, since there is more than one way to animate, you may be more limited in how you achieve it: doing skeletal animation without Assimp would mean writing a model loader that can extract the bone transforms, weights, and other data from the model files, and that could take a while.

There are multiple ways to do animation, both in terms of technique and in terms of whether you run it with hardware acceleration (on the GPU) or on the CPU. I'm going to mention a few of the options you have here, since most people use Assimp to do skeletal animation, which can be pretty intimidating if your math skills aren't strong and you just want something that is easy to put together.

Generally there are three accepted ways to do animation:

  1. Keyframe
  2. Keyframe with Interpolation
  3. Skeletal Animation (hardware skinning)

Keyframe

Keyframe animation is when you create a separate model for each frame of the animation, similar to a 2D sprite sheet. You render the models in succession to produce the animation. This is probably the simplest but most naive approach, since you need to load a separate model for every frame of every animation. The transitions between frames may be noticeable depending on how many frames you produce, and you may need to export several model files before it looks acceptable. Another downside is that you would likely need to produce your own models.

Keyframe with Interpolation

This method is similar to the above; however, rather than producing each key frame as a separate model, only a few key frames are produced and the "missing" frames are generated by the engine using interpolation. We can do this because, if we know the starting point and the ending point of a vertex, we can interpolate to find out where the vertex should be at time = t.
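As a sketch of the idea (the function name and layout here are my own, not taken from any library or from the tutorial below): with matched vertex arrays for two key frames, the in-between pose is just a per-component linear interpolation.

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// Linearly interpolate every vertex component between two key-frame poses.
// 't' runs from 0.0 (pose A) to 1.0 (pose B); both arrays must have the
// same vertex count and the same vertex ordering.
std::vector<float> LerpKeyframes(const std::vector<float>& poseA,
                                 const std::vector<float>& poseB,
                                 float t)
{
    std::vector<float> out(poseA.size());
    for (std::size_t i = 0; i < poseA.size(); ++i)
        out[i] = poseA[i] + (poseB[i] - poseA[i]) * t;
    return out;
}
```

At t = 0 you get key frame A back, at t = 1 key frame B, and anything in between is a blend — which is why preserving vertex order between the exported files matters.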

This tutorial does a great job of explaining how to do key frame animation:

https://www.khronos.org/opengl/wiki/Keyframe_Animation

Again, it doesn't talk about Assimp, but the concepts are the same and you can still use Assimp to load your models. This form of animation is fairly simple to implement and is quite good for a beginner. However, it does come with some drawbacks. If you choose to go this route you may be limited by memory, as this method can consume a lot of memory in VBOs, depending on how detailed your models are. If you create your own models you will also want to preserve the vertex order in the model files, so that interpolating from vertex 2 of one model file (key frame 1) to vertex 2 of another model file (key frame 2) is correct.

Skeletal Animation

This is probably the most difficult way to do animation, but it deals with a lot of the issues in methods 1 and 2. You'll also find that by doing skeletal animation you are able to load many of the newer file formats: those that specify bone transformations and rotations, rather than requiring a new file for each key frame.

This is one case where I think having Assimp will be of great benefit. Assimp is very well equipped to deal with getting the data you need out of the model file to do skeletal animation.
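To give a feel for the data involved: each animation channel Assimp loads is a list of timestamped keys that you sample at the current animation time. The sketch below uses minimal stand-in structs of my own so it compiles without the library; with Assimp the same keys would come from `scene->mAnimations[a]->mChannels[c]->mPositionKeys`, and rotation keys work the same way except you would interpolate quaternions (slerp) instead of blending linearly.

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// Minimal stand-ins for Assimp's aiVectorKey (a time plus a value),
// so this sketch compiles without the library.
struct Vec3      { float x, y, z; };
struct VectorKey { double time; Vec3 value; };

// Find the pair of keys bracketing 'time' and interpolate between them.
// Assumes 'keys' is non-empty and sorted by time, as Assimp provides it.
Vec3 SamplePositionChannel(const std::vector<VectorKey>& keys, double time)
{
    if (time <= keys.front().time) return keys.front().value; // clamp before first key
    if (time >= keys.back().time)  return keys.back().value;  // clamp after last key

    std::size_t i = 0;
    while (keys[i + 1].time < time) ++i;  // key just before 'time'

    const VectorKey& k0 = keys[i];
    const VectorKey& k1 = keys[i + 1];
    float t = float((time - k0.time) / (k1.time - k0.time));

    return { k0.value.x + (k1.value.x - k0.value.x) * t,
             k0.value.y + (k1.value.y - k0.value.y) * t,
             k0.value.z + (k1.value.z - k0.value.z) * t };
}
```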

If you are interested in doing skeletal animation, this tutorial is a fantastic way to go about it, and it uses Assimp as well.

http://ogldev.atspace.co.uk/www/tutorial38/tutorial38.html

I used this tutorial myself to achieve skeletal animation in my own engine. So I strongly encourage you to read this if you decide to go down that path.

The last thing I will mention, since I've noticed it confuses some people, is that animation can be done using hardware acceleration, but hardware acceleration is not strictly necessary.

Both of the tutorials I linked above do their animation with hardware acceleration. In this context, that means the vertex computations are done on the GPU, in the body of the vertex shader.

However, I know that a lot of people may not be familiar with modern OpenGL, in which case you can still do these same calculations on the CPU. The idea is to look at what is happening in the vertex shader and write a function that performs those calculations for you.
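As a rough sketch of what that CPU-side function might look like (the names and the four-bones-per-vertex limit are my own assumptions, mirroring the common vertex-shader layout rather than any particular tutorial's code): each vertex is transformed by every bone that influences it, and the results are mixed by the bone weights.

```cpp
#include <cassert>

// Plain 4x4 column-major matrix, applied to a point with w = 1 — the same
// multiply a vertex shader performs with a mat4 uniform.
struct Mat4 { float m[16]; };

void TransformPoint(const Mat4& mat, const float in[3], float out[3])
{
    for (int r = 0; r < 3; ++r)
        out[r] = mat.m[r]      * in[0]
               + mat.m[r + 4]  * in[1]
               + mat.m[r + 8]  * in[2]
               + mat.m[r + 12];          // translation column (w = 1)
}

// CPU version of linear blend skinning: transform the vertex by up to four
// bone matrices and mix the results by the weights (which should sum to 1).
// On the GPU this same weighted sum lives in the vertex shader.
void SkinVertex(const float pos[3],
                const Mat4* bones, const int boneIds[4],
                const float weights[4], float out[3])
{
    out[0] = out[1] = out[2] = 0.0f;
    for (int b = 0; b < 4; ++b)
    {
        if (weights[b] == 0.0f) continue;   // unused influence slot
        float tmp[3];
        TransformPoint(bones[boneIds[b]], pos, tmp);
        out[0] += tmp[0] * weights[b];
        out[1] += tmp[1] * weights[b];
        out[2] += tmp[2] * weights[b];
    }
}
```

Running this over every vertex each frame replaces the shader-side skinning, at the cost of re-uploading the vertex data to the GPU every frame.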

You also asked about file formats for animation; this is going to depend on what route you take. If you want to animate formats like .fbx and .md5 you will likely be doing skeletal animation. If you go for key frame animation I would probably stick with .obj; it is the format I find easiest to work with, as the specification is quite easy to understand.

While you're debugging the animation in your engine, make sure you have a file you know works; another pitfall is that free models downloaded from the internet can come in any old format, with absolute texture paths, different coordinate systems (Y-up or Z-up), and so on.
