Android Camera2 API Showing Processed Preview Image


Problem Description

The new Camera2 API is very different from the old one. The part of the pipeline where the manipulated camera frames are shown to the user confuses me. I know there is a very good explanation in "Camera preview image data processing with Android L and Camera2 API", but showing the frames is still not clear. My question is: what is the way to show frames on screen that come from the ImageReader's callback function after some processing, while preserving the efficiency and speed of the Camera2 API pipeline?

Example flow:

camera.add_target(imagereader.getsurface) -> do some processing in the ImageReader's callback -> (show that processed image on screen?)
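For concreteness, a minimal sketch of that pipeline (session setup and error handling omitted; names such as cameraDevice, backgroundHandler, width, and height are assumed):

```java
// The camera feeds an ImageReader; its callback is where processing happens.
ImageReader imageReader = ImageReader.newInstance(
        width, height, ImageFormat.YUV_420_888, /*maxImages*/ 3);

imageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage();
    if (image == null) return;
    // ... process the frame here; the open question is how to display it ...
    image.close(); // always release Images back to the reader
}, backgroundHandler);

CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(imageReader.getSurface()); // camera.add_target(imagereader.getsurface)
```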

Workaround idea: send a Bitmap to an ImageView every time a new frame is processed.

Answer

Edit after clarification of the question; original answer at bottom.

Depends on where you're doing your processing.

If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
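A minimal sketch of that route, assuming an initialized context and a displaySurface taken from a SurfaceView or TextureView; forEach_process stands in for whatever kernel does the actual processing:

```java
RenderScript rs = RenderScript.create(context);

// Output Allocation backed by the display Surface.
Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width).setY(height).create();
Allocation outputAlloc = Allocation.createTyped(rs, rgbaType,
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
outputAlloc.setSurface(displaySurface); // e.g. surfaceView.getHolder().getSurface()

// ... run your script so its result lands in outputAlloc, e.g.:
// script.forEach_process(inputAlloc, outputAlloc); // hypothetical kernel name

outputAlloc.ioSend(); // pushes the Allocation's contents to the Surface
```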

If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
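A minimal sketch, assuming an already-initialized eglDisplay, eglConfig, and eglContext, with surface being the android.view.Surface to display into:

```java
// Wrap the display Surface in an EGLSurface and make it current.
int[] surfaceAttribs = { EGL14.EGL_NONE };
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(
        eglDisplay, eglConfig, surface, surfaceAttribs, 0);
EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);

// ... draw the processed frame with your GLES shaders ...

EGL14.eglSwapBuffers(eglDisplay, eglSurface); // posts the buffer to the screen
```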

If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface you pass from Java and convert to an ANativeWindow.
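A sketch of the Java half of that route; nativeDrawFrame is a hypothetical JNI method whose C/C++ side would obtain the window with ANativeWindow_fromSurface and write the pixels through the NDK:

```java
// Native code converts the Surface via ANativeWindow_fromSurface,
// then locks/unlocks the window buffer to write the processed pixels.
private native void nativeDrawFrame(Surface surface, byte[] processedPixels);

void showFrame(byte[] processedPixels) {
    nativeDrawFrame(surfaceView.getHolder().getSurface(), processedPixels);
}
```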

If you're doing Java-level processing, that's really slow and you probably don't want to. But you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.

Or, as you say, draw to an ImageView every frame, but that'll be slow.
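That workaround, for completeness (imageView and the processed bitmap are assumed):

```java
// Post each processed Bitmap to the ImageView on the main thread.
imageView.post(() -> imageView.setImageBitmap(bitmap));
```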

Original answer:

If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
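For example, inside the callback of an ImageFormat.JPEG ImageReader:

```java
Image image = reader.acquireLatestImage();
if (image != null) {
    // JPEG data arrives as a single plane of compressed bytes.
    ByteBuffer buffer = image.getPlanes()[0].getBuffer();
    byte[] jpegBytes = new byte[buffer.remaining()];
    buffer.get(jpegBytes);
    image.close(); // release the Image before the (slow) decode
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
    // ... display or save the Bitmap ...
}
```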

If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this.
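A simplified sketch of such a conversion; real code must honor each plane's rowStride and pixelStride (this assumes the U and V planes share theirs), and per-pixel Java loops like this are part of why Java-level processing is slow:

```java
Image.Plane yP = image.getPlanes()[0];
Image.Plane uP = image.getPlanes()[1];
Image.Plane vP = image.getPlanes()[2];
ByteBuffer yBuf = yP.getBuffer(), uBuf = uP.getBuffer(), vBuf = vP.getBuffer();

int[] argb = new int[width * height];
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width; col++) {
        int y = yBuf.get(row * yP.getRowStride() + col * yP.getPixelStride()) & 0xFF;
        // Chroma is subsampled 2x2, so index with row/2 and col/2.
        int uvIdx = (row / 2) * uP.getRowStride() + (col / 2) * uP.getPixelStride();
        int u = (uBuf.get(uvIdx) & 0xFF) - 128;
        int v = (vBuf.get(uvIdx) & 0xFF) - 128;
        // Approximate BT.601 YCbCr -> RGB, clamped to [0, 255].
        int r = Math.max(0, Math.min(255, (int) (y + 1.402f * v)));
        int g = Math.max(0, Math.min(255, (int) (y - 0.344f * u - 0.714f * v)));
        int b = Math.max(0, Math.min(255, (int) (y + 1.772f * u)));
        argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
    }
}
Bitmap bmp = Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
```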

If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
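For the DNG case, DngCreator (API 21+) handles the writing; a sketch assuming you kept the CameraCharacteristics and the frame's TotalCaptureResult:

```java
// rawImage comes from an ImageFormat.RAW_SENSOR ImageReader.
try (DngCreator dng = new DngCreator(characteristics, captureResult);
     FileOutputStream out = new FileOutputStream(dngFile)) {
    dng.writeImage(out, rawImage); // writes a DNG with full sensor metadata
} catch (IOException e) {
    // handle the write failure
}
```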

