Simultaneous camera preview and processing


Question

I'm designing an application that has an OpenGL processing pipeline (a collection of shaders) and simultaneously requires the end user to see the unprocessed camera preview.

For the sake of example, suppose you want to show the user the camera preview and at the same time count the number of red objects in the scenes you receive from the camera, but any shaders you use to count the objects, such as hue filtering, should not be visible to the user.

How would I go about setting this up properly?

I know I can set up a camera preview and then, in the callback, receive camera frame data in YUV format, dump that into an OpenGL texture, and process the frame that way; however, that has performance problems associated with it. I have to round-trip the data from the camera hardware to the VM and then pass it back to GPU memory. To solve this, I'm using SurfaceTexture to get the data from the camera directly in a format OpenGL understands and pass that to my shaders.
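For concreteness, here is a minimal sketch of that zero-copy path, assuming the pre-Camera2 android.hardware.Camera API (the CameraTextureSource class name is mine):

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Sketch of the zero-copy path described above: the camera writes straight
// into a GL_TEXTURE_EXTERNAL_OES texture via a SurfaceTexture, so frame data
// never round-trips through the VM.
public class CameraTextureSource {
    private SurfaceTexture cameraSurfaceTexture;
    private int cameraTextureId;

    // Must be called on the thread that owns the EGL context.
    public void start(Camera camera) throws java.io.IOException {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        cameraTextureId = tex[0];

        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, cameraTextureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        cameraSurfaceTexture = new SurfaceTexture(cameraTextureId);
        camera.setPreviewTexture(cameraSurfaceTexture);
        camera.startPreview();
    }

    // Latches the most recent camera frame into the external texture.
    public void updateFrame() {
        cameraSurfaceTexture.updateTexImage();
    }
}
```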

I thought I'd be able to show that same unprocessed SurfaceTexture to the end user, but TextureView does not have a constructor or a setter where I can pass it the SurfaceTexture I want it to render. It always creates its own.

This is an overview of my current setup:


  • GLRenderThread: this class extends Thread, sets up the OpenGL context, display, etc., and uses a SurfaceTexture as the surface (3rd parameter of eglCreateWindowSurface; see the sketch after this list).
  • GLFilterChain: a collection of shaders that perform detection on the input texture.
  • Camera: uses a separate SurfaceTexture which serves as the input of the GLFilterChain and grabs the camera's preview.
  • Finally, a TextureView that displays the GLRenderThread's SurfaceTexture.
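
A condensed sketch of the EGL setup the first bullet describes, using the EGL14 bindings (the EglSetup name and method structure are illustrative, not from the question):

```java
import android.graphics.SurfaceTexture;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

public class EglSetup {
    // Creates an ES 2.0 context and binds the TextureView's SurfaceTexture
    // as the window surface, as in the GLRenderThread described above.
    public static EGLSurface setUp(SurfaceTexture surfaceTexture) {
        EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        int[] version = new int[2];
        EGL14.eglInitialize(display, version, 0, version, 1);

        int[] configAttribs = {
                EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8, EGL14.EGL_BLUE_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_NONE };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

        int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        EGLContext context = EGL14.eglCreateContext(
                display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

        // 3rd parameter: the SurfaceTexture to render into.
        EGLSurface surface = EGL14.eglCreateWindowSurface(
                display, configs[0], surfaceTexture, new int[]{ EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, surface, surface, context);
        return surface;
    }
}
```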

Obviously, with this setup I'm showing the processed frames to the user, which is not what I want. Further, the processing of the frames is not real-time. Basically, I run the input from the camera through the chain once, and once all the filters are done I call updateTexImage to grab the next frame from the camera. My processing runs at around 10 frames per second on a Nexus 4.
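
In code, the per-frame flow is roughly this (running, filterChain, GLFilter, and drawFullScreenQuad() are placeholder names for the chain's internals, not real APIs):

```java
// Illustrative loop only; the placeholders stand in for GLFilterChain internals.
while (running) {
    cameraSurfaceTexture.updateTexImage();        // latch the next camera frame
    int texId = cameraTextureId;                  // start from the raw frame
    for (GLFilter filter : filterChain) {
        texId = filter.apply(texId);              // each pass renders into its own FBO
    }
    drawFullScreenQuad(texId);                    // final, processed texture goes to the window,
    EGL14.eglSwapBuffers(display, windowSurface); // which is why the user sees it
}
```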

I feel that I probably need to use two GL contexts, one for the real-time preview and one for the processing, but I'm not certain. I'm hoping someone can push me in the right direction.

Answer

Unless your processing runs slower than real time, the answer is simple: just keep the original camera texture untouched, render the processed image into a different texture, and display both to the user side by side in a single GLView. Keep a single thread, as all the processing happens on the GPU anyway. Multiple threads only complicate matters here.
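
A sketch of what that single-threaded, side-by-side frame could look like (runFilterChain(), drawTexturedQuad(), and processedTextureId are stand-ins for the questioner's existing helpers, not real APIs):

```java
import android.opengl.GLES20;

// One thread, one surface, two viewports: raw preview left, result right.
void onDrawFrame(int width, int height) {
    cameraSurfaceTexture.updateTexImage();     // latch the raw camera frame

    runFilterChain(cameraTextureId);           // fills processedTextureId off-screen

    // Left half: the untouched camera texture (a GL_TEXTURE_EXTERNAL_OES sampler).
    GLES20.glViewport(0, 0, width / 2, height);
    drawTexturedQuad(cameraTextureId, true /* external */);

    // Right half: the filter chain's final output.
    GLES20.glViewport(width / 2, 0, width / 2, height);
    drawTexturedQuad(processedTextureId, false);
}
```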

The number of processing steps does not really matter, as there can be an arbitrary number of intermediate textures (see also ping-ponging) that are never displayed to the user - no one and nothing forces you to display them.
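
A minimal sketch of that ping-pong pattern, assuming each pass reads one texture and writes into one FBO (ShaderPass and pass.draw() are placeholder names):

```java
import android.opengl.GLES20;

// Two FBO-backed textures alternate as source and destination, so any number
// of intermediate passes needs only two off-screen textures. Assumes tex[i]
// has already been attached to fbo[i] via glFramebufferTexture2D.
int src = 0, dst = 1;
for (ShaderPass pass : passes) {
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[dst]); // render off-screen
    pass.draw(tex[src]);                                       // read src, write dst
    int tmp = src; src = dst; dst = tmp;                       // swap roles
}
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);            // back to the window
// tex[src] now holds the final result; none of the intermediates were shown.
```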

The notion of real time is probably confusing here. Just think of a frame as an indivisible snapshot in time. By doing so you ignore the delay it takes for the image to go from the camera to the screen, but as long as you can keep an interactive frame rate (at least 20 frames per second, say), that delay can mostly be ignored.

On the other hand, if your processing is much slower, you need to choose between introducing a delay in the camera feed and processing only every Nth frame, or alternatively displaying every camera frame in real time and letting the next processed frame lag behind. To do the latter, you would probably need two separate rendering contexts to enable asynchronous processing, which might be hard to do on Android (or it may be as simple as creating a second GLView, since you can live without data sharing between the contexts).
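
If you do go the two-context route, the sharing mentioned above hinges on the share_context argument to eglCreateContext; a hedged sketch (display, config, and previewContext are assumed to come from the existing setup):

```java
import android.opengl.EGL14;
import android.opengl.EGLContext;

// Pass the preview thread's context as share_context so both contexts can
// sample the same texture objects; pass EGL14.EGL_NO_CONTEXT instead if you
// can live without sharing, as suggested above.
int[] attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext processingContext = EGL14.eglCreateContext(
        display, config, previewContext, attribs, 0);
```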

