Apply custom filters to camera output


Problem description

How do I apply custom filters to single frames in the camera output, and show them?

What I've tried so far:

mCamera.setPreviewCallback(new CameraGreenFilter());

public class CameraGreenFilter implements PreviewCallback {

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        final int len = data.length;
        for(int i=0; i<len; ++i){
            data[i] *= 2;
        }
    }
}

Although its name contains "green", I actually just want to modify the values somehow (in this case, the colors would be intensified a bit). Long story short, it does not work.

I figured out that the byte array 'data' is a copy of the camera output; but this doesn't really help, because I need the 'real' buffer.

I've heard you could implement this with OpenGL. That sounds very complicated.

Is there an easier way? Otherwise, how would this OpenGL-to-SurfaceView mapping work?

Recommended answer

OK, there are several ways to do this, but there is a significant problem with performance. The byte[] from the camera is in YUV format, which has to be converted to some sort of RGB format if you want to display it. This conversion is quite an expensive operation and significantly lowers the output fps.

It depends on what you actually want to do with the camera preview. The best solution is to draw the camera preview without a callback and render some effects over it; that is the usual way to do augmented reality stuff.
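
For reference, a minimal sketch of that callback-free approach (the view names previewSurface and overlayView are illustrative assumptions, not part of the original answer):

    // Let the camera render straight into a SurfaceView, with no preview callback,
    // and draw effects in a separate transparent view stacked above it.
    try {
        mCamera.setPreviewDisplay(previewSurface.getHolder());
    } catch (IOException e) {
        e.printStackTrace();
    }
    mCamera.startPreview();
    // overlayView sits on top of previewSurface (e.g. both children of a FrameLayout)
    // and draws markers or effects in its onDraw(Canvas); the preview itself is untouched.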

But if you really need to display the output manually, there are several ways to do that. Your example does not work for several reasons. First, you are not displaying the image at all. If you call this:

      mCamera.setPreviewCallback(new CameraGreenFilter());
      mCamera.setPreviewDisplay(null);
      

then your camera is not displaying the preview at all, so you have to display it manually. Also, you can't do any expensive operations in the onPreviewFrame method, because the lifetime of data is limited; it gets overwritten on the next frame. One hint: use setPreviewCallbackWithBuffer, which is faster because it reuses one buffer and does not have to allocate new memory on each frame.
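
A rough sketch of that hint (the buffer sizing and registering the callback on the surface class itself are assumptions here, not the answer's code):

    // Register a preview callback that reuses one pre-allocated buffer.
    Camera.Parameters params = mCamera.getParameters();
    Camera.Size size = params.getPreviewSize();
    // NV21 uses 12 bits per pixel, so one frame is width * height * 3 / 2 bytes.
    byte[] buffer = new byte[size.width * size.height * 3 / 2];

    mCamera.addCallbackBuffer(buffer);           // hand the buffer to the camera once
    mCamera.setPreviewCallbackWithBuffer(this);  // 'this' implements Camera.PreviewCallback
    mCamera.startPreview();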

So you have to do something like this:

    private byte[] cameraFrame;
    private byte[] buffer;
    private int prevX, prevY; // preview width and height (from Camera.Parameters)

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        cameraFrame = data;
        // addCallbackBuffer(buffer) has to be called once somewhere before you
        // call mCamera.startPreview(); here the buffer is handed back for reuse.
        camera.addCallbackBuffer(data);
    }


    private ByteArrayOutputStream baos;
    private YuvImage yuvimage;
    private byte[] jdata;
    private Bitmap bmp;
    private Paint paint;

    @Override // from SurfaceView
    public void onDraw(Canvas canvas) {
        baos = new ByteArrayOutputStream();
        yuvimage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);

        // compress the whole preview frame to JPEG (the Rect covers the preview size)
        yuvimage.compressToJpeg(new Rect(0, 0, prevX, prevY), 80, baos);
        jdata = baos.toByteArray();

        bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);

        canvas.drawBitmap(bmp, 0, 0, paint);
        invalidate(); // to call onDraw again
    }
      

To make this work, you need to call setWillNotDraw(false) in the class constructor or somewhere like that.
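
For example, in the constructor of the SurfaceView subclass (a minimal sketch; the class name CameraPreview is an assumption):

    public class CameraPreview extends SurfaceView implements Camera.PreviewCallback {

        public CameraPreview(Context context) {
            super(context);
            // A SurfaceView skips onDraw() by default; this re-enables it so the
            // bitmap drawn above is actually rendered.
            setWillNotDraw(false);
        }

        // onPreviewFrame(...) and onDraw(...) as shown above
    }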

In onDraw you can, for example, apply paint.setColorFilter(filter) if you want to modify the colors. I can post an example of that if you want.
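
As an illustration of such a filter, assuming you want to boost the green channel as in the original question (the matrix values are arbitrary, not from the answer):

    // Scale the green channel by 1.5x and leave red, blue and alpha untouched.
    ColorMatrix cm = new ColorMatrix(new float[] {
            1.0f, 0,    0,    0, 0,   // red
            0,    1.5f, 0,    0, 0,   // green
            0,    0,    1.0f, 0, 0,   // blue
            0,    0,    0,    1, 0    // alpha
    });
    paint = new Paint();
    paint.setColorFilter(new ColorMatrixColorFilter(cm));
    // Use this paint in canvas.drawBitmap(bmp, 0, 0, paint) inside onDraw().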

So this will work, but the performance will be low (less than 8 fps), because BitmapFactory.decodeByteArray is slow. You can try to convert the data from YUV to RGB with native code and the Android NDK, but that's quite complicated.
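
To give an idea of the work involved, here is a plain-Java sketch of that NV21-to-ARGB conversion (an NDK version would do the same per-pixel math in C; this is for illustration only, not the answer's code):

    // Converts an NV21 frame to ARGB_8888 pixels; 'rgb' must hold width*height ints.
    static void decodeYUV420SP(int[] rgb, byte[] yuv, int width, int height) {
        final int frameSize = width * height;
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & yuv[yp]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {
                    v = (0xff & yuv[uvp++]) - 128;
                    u = (0xff & yuv[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;
                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;
                rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                        | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
    }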

The other option is to use OpenGL ES. You need a GLSurfaceView, where you bind the camera frame as a texture (in the GLSurfaceView implement Camera.PreviewCallback, so you use onPreviewFrame the same way as with a regular surface). But there is the same problem: you need to convert the YUV data. There is one chance though - you can display only the luminance data from the preview (a greyscale image) quite fast, because the first half of the byte array in YUV is just luminance data, without colors. So in onPreviewFrame you use arraycopy to copy the first half of the array, and then you bind the texture like this:

      gl.glGenTextures(1, cameraTexture, 0);
      int tex = cameraTexture[0];
      gl.glBindTexture(GL10.GL_TEXTURE_2D, tex);
      gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, 
          this.prevX, this.prevY, 0, GL10.GL_LUMINANCE, 
          GL10.GL_UNSIGNED_BYTE, ByteBuffer.wrap(this.cameraFrame)); //cameraFrame is the first half of the byte[] from onPreviewFrame
      
      gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
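
The arraycopy step mentioned above could look roughly like this in onPreviewFrame (a sketch; prevX and prevY are the assumed preview dimensions):

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // The Y (luminance) plane is the first prevX * prevY bytes of the NV21 frame.
        int ySize = prevX * prevY;
        if (cameraFrame == null || cameraFrame.length != ySize) {
            cameraFrame = new byte[ySize];
        }
        System.arraycopy(data, 0, cameraFrame, 0, ySize);
        camera.addCallbackBuffer(data); // return the buffer for the next frame
    }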
      

You can get about 16-18 fps this way, and you can use OpenGL to apply some filters. I can send you some more code for this if you want, but it's too long to post here...

For some more info, you can see my similar question, but there is no good solution there either...

