Process every camera frame as Bitmap with OpenGL

Problem Description

I have an app, where I want to process every given frame from the camera to do some ARCore stuff. So I have a class implementing GLSurfaceView.Renderer, and in this class I have the onDrawFrame(GL10 gl) method. In this method, I want to work with an Android bitmap, so I call this code to get a bitmap from the current frame:

private Bitmap getTargetImageBitmapOpenGL(int cx, int cy, int w, int h) {
    try {
        // Lazily allocate the reusable Bitmap and direct pixel buffer.
        if (currentTargetImageBitmap == null) {
            currentTargetImageBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);

            byteBuffer = ByteBuffer.allocateDirect(w * h * 4);
            byteBuffer.order(ByteOrder.nativeOrder());
        }

        // cy = height - cy;

        // Clamp the center point so the w x h window stays inside the frame.
        if ((cx + w / 2) > width) {
            Log.e(TAG, "TargetImage CenterPoint invalid A: " + cx + " " + cy);
            cx = width - w / 2;
        }

        if ((cx - w / 2) < 0) {
            Log.e(TAG, "TargetImage CenterPoint invalid B: " + cx + " " + cy);
            cx = w / 2;
        }

        if ((cy + h / 2) > height) {
            Log.e(TAG, "TargetImage CenterPoint invalid C: " + cx + " " + cy);
            cy = height - h / 2;
        }

        if ((cy - h / 2) < 0) {
            Log.e(TAG, "TargetImage CenterPoint invalid D: " + cx + " " + cy);
            cy = h / 2;
        }

        int x = cx - w / 2;
        int y = cy - h / 2;

        // Synchronous read-back from the framebuffer; this stalls the GL pipeline.
        GLES20.glReadPixels(x, y, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
            byteBuffer);

        IntBuffer currentTargetImagebuffer = byteBuffer.asIntBuffer();

        currentTargetImagebuffer.rewind();
        currentTargetImageBitmap.copyPixelsFromBuffer(currentTargetImagebuffer);

        return currentTargetImageBitmap;

    } catch (Exception e) {
        e.printStackTrace();
    }

    return null;
}
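One detail worth noting about the snippet above: glReadPixels returns rows bottom-up (GL's origin is the bottom-left corner), while an Android Bitmap stores rows top-down, which is what the commented-out `cy = height - cy` line hints at. A minimal pure-Java sketch of the required row flip (the `FlipRows` class and its helper are hypothetical, not part of the original code):

```java
import java.util.Arrays;

public class FlipRows {
    // Reverse the row order of a w*h pixel array so that data read
    // bottom-up by glReadPixels matches Bitmap's top-down layout.
    static int[] flipRows(int[] pixels, int w, int h) {
        int[] out = new int[pixels.length];
        for (int row = 0; row < h; row++) {
            System.arraycopy(pixels, row * w, out, (h - 1 - row) * w, w);
        }
        return out;
    }

    public static void main(String[] args) {
        // A 2x2 image with rows {1, 2} and {3, 4} becomes {3, 4} and {1, 2}.
        System.out.println(Arrays.toString(flipRows(new int[]{1, 2, 3, 4}, 2, 2)));
        // prints [3, 4, 1, 2]
    }
}
```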

This method takes around 90 ms, which is definitely too slow to process every incoming frame in realtime, which I need to do because the onDrawFrame(GL10 gl) method also draws this frame to a surface view. Any idea why this is so slow? It would also suffice if I could only read the pixels of every other frame, but draw every frame to my SurfaceView. I tried to call the shown method in AsyncTask.execute(), but another thread cannot read via the GLES20.glReadPixels() method, since it is not the GL thread.
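A common mitigation for the glReadPixels stall (separate from the approach in the answer below) is asynchronous read-back through a pixel buffer object, available from OpenGL ES 3.0: with a PBO bound, glReadPixels returns without waiting for the GPU, and the data is mapped one or more frames later. A sketch under those assumptions; everything must still run on the GL thread, and the field names (`pboId`, `w`, `h`, `currentTargetImageBitmap`) are illustrative, mirroring the question's code:

```java
// Requires android.opengl.GLES30 (OpenGL ES 3.0 context).

// One-time setup: allocate a PBO sized for one RGBA frame region.
void initPbo() {
    int[] pbo = new int[1];
    GLES30.glGenBuffers(1, pbo, 0);
    pboId = pbo[0];
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboId);
    GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, w * h * 4, null,
        GLES30.GL_STREAM_READ);
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}

// Frame N: kick off the read. With a PBO bound as the pack buffer,
// glReadPixels takes a byte offset (here 0) instead of a client buffer
// and returns without blocking the draw loop.
void startRead(int x, int y) {
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboId);
    GLES30.glReadPixels(x, y, w, h, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}

// Frame N+1 (or later): map the PBO and copy the pixels into the Bitmap.
// By now the GPU has usually finished, so the map does not stall.
void finishRead() {
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pboId);
    Buffer mapped = GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, w * h * 4, GLES30.GL_MAP_READ_BIT);
    if (mapped != null) {
        currentTargetImageBitmap.copyPixelsFromBuffer(mapped);
        GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
    }
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}
```

This also directly answers the "every other frame" idea: call startRead on even frames and finishRead on odd ones, and the draw path never waits on the copy.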

Recommended Answer

A lot of modern GPUs can decode YUV natively; the issue is how to get the YUV surface into OpenGL ES, as this is not normally something OpenGL ES does. Most operating systems (Android included) let you import external surfaces directly into OpenGL ES via the EGL_image_external extension, and these external surfaces can be marked up as being YUV, with automatic color conversion.

Even better, this is all handled zero-copy; the camera buffer can be imported and accessed directly by the GPU.

On Android this import mechanism is exposed via the SurfaceTexture class, and the necessary usage is described here: https://source.android.com/devices/graphics/arch-st
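To make the import path concrete, a minimal sketch of the SurfaceTexture setup follows. The SurfaceTexture API, the GL_TEXTURE_EXTERNAL_OES target, and the samplerExternalOES shader type are real; the surrounding setup code is illustrative, not a complete renderer:

```java
// Requires android.graphics.SurfaceTexture, android.opengl.GLES11Ext,
// android.opengl.GLES20.

// Create a texture and bind it as an *external* texture, not GL_TEXTURE_2D.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);

// Wrap it in a SurfaceTexture and hand that to the camera (or let
// ARCore attach to the texture id). Then, once per frame on the GL thread:
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.updateTexImage();

// The fragment shader samples the external (YUV-backed) surface; the
// driver performs the color conversion:
String fragmentShader =
    "#extension GL_OES_EGL_image_external : require\n" +
    "precision mediump float;\n" +
    "uniform samplerExternalOES sTexture;\n" +
    "varying vec2 vTexCoord;\n" +
    "void main() { gl_FragColor = texture2D(sTexture, vTexCoord); }\n";
```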
