android ffmpeg opengl es render movie


Problem description


I am trying to render video via the NDK, to add some features that just aren't supported in the SDK. I am using FFmpeg to decode the video and can compile it via the NDK, and used this as a starting point. I have modified that example so that instead of using glDrawTexiOES to draw the texture, I have set up some vertices and am rendering the texture on top of them (the OpenGL ES way of rendering a quad).

Below is what I am doing to render, but the glTexImage2D call is slow. I want to know if there is any way to speed this up, or give the appearance of speeding it up, such as setting up some textures in the background and rendering pre-set-up textures. Or is there any other way to draw the video frames to the screen more quickly on Android? Currently I can only get about 12 fps.

glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, textureConverted);

// this is slow
glTexImage2D(GL_TEXTURE_2D,      /* target */
             0,                  /* level */
             GL_RGBA,            /* internal format */
             textureWidth,       /* width */
             textureHeight,      /* height */
             0,                  /* border */
             GL_RGBA,            /* format */
             GL_UNSIGNED_BYTE,   /* type */
             pFrameConverted->data[0]);

glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);

EDIT: I changed my code to initialize the texture with glTexImage2D only once and update it with glTexSubImage2D; it didn't make much of an improvement to the framerate.

I then modified the code to write into a native Bitmap object from the NDK. With this approach I have a background thread that processes the next frame and populates the Bitmap object on the native side. I think this has potential, but I need to speed up the conversion of the AVFrame object from FFmpeg into a native Bitmap. Below is what I am currently using to convert, a brute-force approach. Is there any way to speed up or otherwise optimize this conversion?

static void fill_bitmap(AndroidBitmapInfo *info, void *pixels, AVFrame *pFrame)
{
    uint8_t *frameLine;

    int yy;
    for (yy = 0; yy < info->height; yy++) {
        uint8_t *line = (uint8_t *)pixels;
        frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);

        int xx;
        for (xx = 0; xx < info->width; xx++) {
            int out_offset = xx * 4;
            int in_offset = xx * 3;

            line[out_offset] = frameLine[in_offset];
            line[out_offset + 1] = frameLine[in_offset + 1];
            line[out_offset + 2] = frameLine[in_offset + 2];
            line[out_offset + 3] = 0;
        }
        pixels = (char *)pixels + info->stride;
    }
}
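One way to reduce the per-pixel work in the brute-force loop above is to pack each output pixel as a single 32-bit store instead of four byte stores. This is only a sketch: the struct types below are hypothetical stand-ins for the AndroidBitmapInfo and AVFrame fields actually used (so the example is self-contained), and it assumes a little-endian CPU, which is true of all Android ABIs:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the AndroidBitmapInfo and AVFrame fields used above. */
typedef struct { int width, height, stride; } BitmapInfoLike;
typedef struct { uint8_t *data0; int linesize0; } FrameLike;

/* RGB24 -> RGBA conversion with one 32-bit store per pixel. On a
 * little-endian CPU the bytes land in memory as R, G, B, A. Alpha is left
 * at 0 to match the original code (OR in 0xFF000000u for opaque pixels). */
static void fill_bitmap_fast(const BitmapInfoLike *info, void *pixels,
                             const FrameLike *frame)
{
    int yy, xx;
    for (yy = 0; yy < info->height; yy++) {
        uint32_t *out = (uint32_t *)((char *)pixels + yy * info->stride);
        const uint8_t *in = frame->data0 + yy * frame->linesize0;
        for (xx = 0; xx < info->width; xx++) {
            const uint8_t *p = in + xx * 3;
            out[xx] = (uint32_t)p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16);
        }
    }
}
```

In practice, though, the fastest route is usually to skip the hand-written loop entirely and have FFmpeg's libswscale (sws_getContext/sws_scale) output RGBA directly, since it has optimized assembly paths for exactly this conversion.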

Solution

Yes, texture (and buffer, shader, and framebuffer) creation is slow.

That's why you should create the texture only once. After it is created, you can modify its data by calling glTexSubImage2D.
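That create-once/update-per-frame pattern looks roughly like this (a sketch assuming a current GL ES 1.x context, reusing the textureConverted, textureWidth, textureHeight, and pFrameConverted names from the question):

```c
/* One-time setup: allocate the texture storage once; data may be NULL. */
glBindTexture(GL_TEXTURE_2D, textureConverted);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Per frame: replace only the pixel data, with no reallocation. */
glBindTexture(GL_TEXTURE_2D, textureConverted);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, pFrameConverted->data[0]);
```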

And to make uploading texture data faster, create two textures. While you use one for display, upload texture data from FFmpeg to the second one. When you display the second one, upload data to the first one, and repeat from the beginning.
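A hypothetical sketch of that double-buffering scheme (tex, displayIndex, and on_new_frame are illustrative names, not from the question; assumes both textures were allocated once as described above):

```c
GLuint tex[2];        /* both created once with glGenTextures + glTexImage2D */
int displayIndex = 0; /* which texture the draw code binds this frame */

static void on_new_frame(const uint8_t *rgba, int width, int height)
{
    int uploadIndex = 1 - displayIndex;

    /* Upload into the texture that is not currently being displayed. */
    glBindTexture(GL_TEXTURE_2D, tex[uploadIndex]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    /* Swap: the next draw call binds the freshly filled texture. */
    displayIndex = uploadIndex;
}
```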

I think it will still not be very fast. You could try the jnigraphics library, which allows access to Bitmap object pixels from the NDK. After that, you just display this Bitmap on screen on the Java side.
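A sketch of that approach, assuming a native function that receives the Bitmap from Java: the AndroidBitmap_* calls are the real jnigraphics API from <android/bitmap.h> (link with -ljnigraphics), while render_frame_to_bitmap and the reuse of fill_bitmap from the question are illustrative:

```c
#include <jni.h>
#include <android/bitmap.h>

/* Fill an android.graphics.Bitmap with the decoded frame from native code.
 * fill_bitmap is the conversion routine from the question. */
static void render_frame_to_bitmap(JNIEnv *env, jobject bitmap, AVFrame *pFrame)
{
    AndroidBitmapInfo info;
    void *pixels;

    if (AndroidBitmap_getInfo(env, bitmap, &info) < 0)
        return;
    if (AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0)
        return;

    fill_bitmap(&info, pixels, pFrame); /* convert while the pixels are locked */

    AndroidBitmap_unlockPixels(env, bitmap);
    /* The Java side then draws this Bitmap, e.g. with Canvas.drawBitmap(). */
}
```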
