Apply custom filters to camera output


Question

How do I apply custom filters to single frames of the camera output, and display them?

What I have tried so far:

mCamera.setPreviewCallback(new CameraGreenFilter());

public class CameraGreenFilter implements PreviewCallback {

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        final int len = data.length;
        for(int i=0; i<len; ++i){
            data[i] *= 2;
        }
    }
}

  • Although its name contains "green", I actually just want to modify the values somehow (in this case, colors would be intensified a bit). Long story short: it does not work.

  • I figured out that the byte array 'data' is a copy of the camera output; but that doesn't really help, because I need the 'real' buffer.

  • I've heard you could implement this with OpenGL. That sounds very complicated.

Is there an easier way? Otherwise, how would this OpenGL-SurfaceView mapping work?

Answer

OK, there are several ways to do this, but there is a significant performance problem. The byte[] from the camera is in YUV format, which has to be converted to some sort of RGB format if you want to display it. This conversion is a quite expensive operation and significantly lowers the output fps.

It depends on what you actually want to do with the camera preview. The best solution is to draw the camera preview without a callback and apply some effects over it; that is the usual way to do augmented-reality stuff.

But if you really need to display the output manually, there are several ways to do that. Your example does not work for several reasons. First, you are not displaying the image at all. If you call this:

      mCamera.setPreviewCallback(new CameraGreenFilter());
      mCamera.setPreviewDisplay(null);
      

then your camera is not displaying the preview at all; you have to display it manually. Also, you can't do any expensive operations in the onPreviewFrame method, because the lifetime of the data is limited: it is overwritten on the next frame. One hint: use setPreviewCallbackWithBuffer (http://developer.android.com/reference/android/hardware/Camera.html#setPreviewCallbackWithBuffer%28android.hardware.Camera.PreviewCallback%29). It is faster, because it reuses one buffer and does not have to allocate new memory on each frame.
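As a minimal sketch of that hint (the Android calls are shown as comments for context; the buffer-size helper is a plain-Java assumption based on NV21's 12-bits-per-pixel layout):

```java
// NV21 uses 12 bits per pixel: a full-resolution Y (luminance) plane
// followed by interleaved V/U chroma at quarter resolution,
// so a frame takes width*height + width*height/2 bytes.
static int nv21BufferSize(int width, int height) {
    return width * height * 3 / 2;
}

// Typical setup with the android.hardware.Camera API:
//   Camera.Size s = mCamera.getParameters().getPreviewSize();
//   byte[] buffer = new byte[nv21BufferSize(s.width, s.height)];
//   mCamera.addCallbackBuffer(buffer);          // hand the buffer to the camera
//   mCamera.setPreviewCallbackWithBuffer(cb);   // cb re-adds it in onPreviewFrame
//   mCamera.startPreview();
```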

So you have to do something like this:

      private byte[] cameraFrame;
      private byte[] buffer;

      @Override
      public void onPreviewFrame(byte[] data, Camera camera) {
          cameraFrame = data;
          camera.addCallbackBuffer(data); // addCallbackBuffer(buffer) has to be called once somewhere before you call mCamera.startPreview();
      }


      private ByteArrayOutputStream baos;
      private YuvImage yuvImage;
      private byte[] jdata;
      private Bitmap bmp;
      private Paint paint;

      @Override // from SurfaceView
      public void onDraw(Canvas canvas) {
          baos = new ByteArrayOutputStream();
          yuvImage = new YuvImage(cameraFrame, ImageFormat.NV21, prevX, prevY, null);

          yuvImage.compressToJpeg(new Rect(0, 0, width, height), 80, baos); // width and height of the screen
          jdata = baos.toByteArray();

          bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);

          canvas.drawBitmap(bmp, 0, 0, paint);
          invalidate(); // to call onDraw again
      }
      

To make this work, you need to call setWillNotDraw(false) in the class constructor or somewhere else (see http://developer.android.com/reference/android/view/View.html#setWillNotDraw%28boolean%29).

In onDraw, you can for example apply paint.setColorFilter(filter) if you want to modify the colors (see http://developer.android.com/reference/android/graphics/Paint.html#setColorFilter%28android.graphics.ColorFilter%29). I can post some example of that if you want.
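To illustrate what such a filter does, here is the 4x5 color-matrix arithmetic that Android's ColorMatrixColorFilter applies to each pixel, sketched in plain Java. The GREEN_BOOST matrix below (scaling the green channel by 1.5) is a hypothetical example, not from the original answer:

```java
// Applies a 4x5 Android-style color matrix to one RGBA pixel.
// Each output channel is the dot product of a matrix row with (r, g, b, a, 1),
// clamped to the 0..255 range.
static int[] applyColorMatrix(float[] m, int r, int g, int b, int a) {
    int[] out = new int[4];
    for (int row = 0; row < 4; row++) {
        float v = m[row * 5] * r + m[row * 5 + 1] * g
                + m[row * 5 + 2] * b + m[row * 5 + 3] * a
                + m[row * 5 + 4];
        out[row] = Math.max(0, Math.min(255, Math.round(v)));
    }
    return out;
}

// A matrix that boosts green by 50% and leaves the other channels alone:
static final float[] GREEN_BOOST = {
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,   // red
    0.0f, 1.5f, 0.0f, 0.0f, 0.0f,   // green
    0.0f, 0.0f, 1.0f, 0.0f, 0.0f,   // blue
    0.0f, 0.0f, 0.0f, 1.0f, 0.0f    // alpha
};
```

On Android itself, the equivalent would be paint.setColorFilter(new ColorMatrixColorFilter(GREEN_BOOST)); the per-pixel math is then done by the framework.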

So this will work, but the performance will be low (less than 8 fps), because BitmapFactory.decodeByteArray is slow. You can try to convert the data from YUV to RGB with native code and the Android NDK, but that is quite complicated.
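For reference, the per-pixel YUV-to-RGB math involved is sketched below in plain Java, using the common BT.601 integer approximation; a native NDK version would apply the same formula, just over the whole frame at once:

```java
// Converts one YUV pixel (NV21 range, BT.601) to a packed ARGB int.
// Integer approximation of: R = 1.164*(Y-16) + 1.596*(V-128), etc.
static int yuvToArgb(int y, int u, int v) {
    int c = y - 16, d = u - 128, e = v - 128;
    int r = (298 * c + 409 * e + 128) >> 8;
    int g = (298 * c - 100 * d - 208 * e + 128) >> 8;
    int b = (298 * c + 516 * d + 128) >> 8;
    r = Math.max(0, Math.min(255, r));
    g = Math.max(0, Math.min(255, g));
    b = Math.max(0, Math.min(255, b));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}
```

Doing this for every pixel of every frame in Java is exactly the cost the answer is warning about.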

The other option is to use OpenGL ES. You need a GLSurfaceView, where you bind the camera frame as a texture (in the GLSurfaceView, implement Camera.PreviewCallback, so you use onPreviewFrame the same way as with a regular surface). But there is the same problem: you need to convert the YUV data. There is one option, though: you can display only the luminance data from the preview (a greyscale image) quite fast, because the first width*height bytes of an NV21 byte array are pure luminance data without colors. So in onPreviewFrame you use arraycopy to copy the luminance part of the array, and then you bind the texture like this:

      gl.glGenTextures(1, cameraTexture, 0);
      int tex = cameraTexture[0];
      gl.glBindTexture(GL10.GL_TEXTURE_2D, tex);
      gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE,
          this.prevX, this.prevY, 0, GL10.GL_LUMINANCE,
          GL10.GL_UNSIGNED_BYTE, ByteBuffer.wrap(this.cameraFrame)); // cameraFrame holds the luminance bytes copied in onPreviewFrame

      gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
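The luminance-copy step mentioned above can be sketched like this (plain Java; the helper name is my own, not from the original answer):

```java
// Extracts the luminance (Y) plane from an NV21 camera frame.
// In NV21, the first width*height bytes are the greyscale Y values;
// the interleaved V/U chroma bytes that follow are simply dropped.
static byte[] extractLuminance(byte[] nv21, int width, int height) {
    byte[] luma = new byte[width * height];
    System.arraycopy(nv21, 0, luma, 0, width * height);
    return luma;
}
```

The returned array is what you would wrap with ByteBuffer.wrap(...) and upload as a GL_LUMINANCE texture.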
      

You can get about 16-18 fps this way, and you can use OpenGL to apply some filters. I can send you some more code for this if you want, but it's too long to put here...

For some more info, you can see my similar question (http://stackoverflow.com/questions/8350230/android-how-to-display-camera-preview-with-callback), but there is not a good solution there either...
