Simultaneous camera preview and processing


Question

I'm designing an application that has an OpenGL processing pipeline (a collection of shaders) and simultaneously requires the end user to see the unprocessed camera preview.

For the sake of example, suppose you want to show the user the camera preview and at the same time count the number of red objects in the scenes you receive from the camera, but any shaders you use to count the objects, such as hue filtering, should not be visible to the user.

How do I set this up correctly?

I know I can set up a camera preview and then receive camera frame data in YUV format in the callback, then dump that into an OpenGL texture and process the frame that way. However, that has performance problems associated with it: I have to round-trip the data from the camera hardware to the VM and then pass it back to GPU memory. To avoid this, I'm using a SurfaceTexture to get the data from the camera directly in an OpenGL-understandable format and pass it to my shaders.
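For reference, a minimal sketch of this SurfaceTexture-based capture path, assuming the old android.hardware.Camera API used in this question and an EGL context already current on the calling thread (the field names and waiting logic are illustrative, not part of the question's actual code):

```java
import java.io.IOException;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

// Fields on the GL render thread (names are hypothetical):
private SurfaceTexture mCameraTexture;      // receives camera frames as an OES texture
private volatile boolean mFrameAvailable;   // set by the camera, read by the GL loop

private void setUpCameraTexture() throws IOException {
    // Create a GL_TEXTURE_EXTERNAL_OES texture for the camera to write into.
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    mCameraTexture = new SurfaceTexture(tex[0]);
    mCameraTexture.setOnFrameAvailableListener(st -> mFrameAvailable = true);

    // Frames now go straight to the OES texture: no YUV callback round trip.
    Camera camera = Camera.open();
    camera.setPreviewTexture(mCameraTexture);
    camera.startPreview();
}

// In the render loop, before sampling the texture in a shader:
// if (mFrameAvailable) { mCameraTexture.updateTexImage(); mFrameAvailable = false; }
```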

I thought I'd be able to show that same unprocessed SurfaceTexture to the end user, but TextureView does not have a constructor or a setter through which I can pass it the SurfaceTexture I want it to render; it always creates its own.

This is an overview of my current setup:

  • GLRenderThread: extends Thread, sets up the OpenGL context, display, etc., and uses a SurfaceTexture as the surface (the third parameter of eglCreateWindowSurface).
  • GLFilterChain: a collection of shaders that perform detection on the input texture.
  • Camera: uses a separate SurfaceTexture which serves as the input to GLFilterChain and grabs the camera's preview.
  • Finally, a TextureView that displays the GLRenderThread's SurfaceTexture.

Obviously, with this setup I'm showing the processed frames to the user, which is not what I want. Furthermore, the processing of the frames is not real-time: basically, I run the input from the camera through the chain once, and once all filters are done I call updateTexImage to grab the next frame from the camera. My processing runs at around 10 frames per second on a Nexus 4.
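In pseudocode, that serial loop looks roughly like this (waitForFrame and the GLFilterChain render call are illustrative names, not the question's real API):

```java
// Each camera frame is fully processed before the next one is requested,
// so preview and processing are locked to the same ~10 fps rate.
while (mRunning) {
    waitForFrame();                     // block until onFrameAvailable fires
    mCameraTexture.updateTexImage();    // latch the next camera frame
    mFilterChain.render(mCameraTexId);  // run every shader pass (hypothetical API)
    EGL14.eglSwapBuffers(mEglDisplay, mEglSurface);  // show the processed result
}
```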

I feel that I probably need to use two GL contexts, one for the real-time preview and one for processing, but I'm not certain. I'm hoping someone can push me in the right direction.

Answer

Unless your processing runs slower than real time, the answer is simple: just keep the original camera texture untouched, render the processed image to a different texture, and display both to the user side by side in a single GLView. Keep a single thread, since all the processing happens on the GPU anyway; multiple threads only complicate matters here.
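A minimal sketch of the side-by-side display, assuming the camera OES texture and a finished filter-chain result already exist (the program handles, texture IDs, dimensions, and drawFullScreenQuad helper are all illustrative):

```java
// Draw the untouched camera frame on the left half and the processed
// result on the right half of the same surface, within one GL context.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

// Left half: raw camera preview, sampled from the external OES texture.
GLES20.glViewport(0, 0, width / 2, height);
GLES20.glUseProgram(mPreviewProgram);           // samplerExternalOES shader
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, mCameraTexId);
drawFullScreenQuad();                           // hypothetical helper

// Right half: output of the filter chain, now an ordinary 2D texture.
GLES20.glViewport(width / 2, 0, width / 2, height);
GLES20.glUseProgram(mDisplayProgram);           // plain sampler2D shader
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mProcessedTex);
drawFullScreenQuad();

EGL14.eglSwapBuffers(mEglDisplay, mEglSurface);
```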

The number of processing steps does not really matter, as there can be an arbitrary number of intermediate textures (see also ping-ponging) that are never displayed to the user; no one and nothing is forcing you to show them.
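For illustration, a sketch of the ping-pong pattern between two offscreen framebuffers (texture/FBO creation is omitted, and ShaderPass with its draw method is a hypothetical abstraction, not a real API):

```java
// Ping-pong between two offscreen framebuffers: each pass reads the
// previous pass's output texture and writes into the other FBO.
// mFbo[i] has mTex[i] attached as its color attachment.
int src = 0, dst = 1;
int inputTex = mCameraTexId;  // first pass samples the camera frame
for (ShaderPass pass : mPasses) {
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFbo[dst]);
    pass.draw(inputTex);      // hypothetical: runs one shader over a quad
    inputTex = mTex[dst];     // next pass reads what we just wrote
    int tmp = src; src = dst; dst = tmp;
}
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);  // back to the screen
// inputTex now holds the final processed image; display it on one side.
```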

The notion of real time is probably confusing here. Just think of a frame as an indivisible snapshot in time. By doing so you will ignore the delay it takes for the image to go from the camera to the screen, but as long as you can keep it at interactive frame rates (say, at least 20 frames per second), this delay can mostly be ignored.

On the other hand, if your processing is much slower, you need to make a choice: either introduce a delay in the camera feed and process only every Nth frame, or alternatively display every camera frame in real time and let the next processed frame lag behind. To do that, you would probably need two separate rendering contexts to enable asynchronous processing, which might be hard to do on Android (or it might be as simple as creating a second GLView, since you can live without data sharing between the contexts).
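As an illustration of the second option within a single context: show every camera frame immediately, but refresh the processed texture only every Nth frame, so the processed view simply lags behind (N, drawPreview, runFilterChain, and drawProcessed are hypothetical, not from the answer):

```java
// Display every camera frame in real time; run the expensive filter
// chain only on every Nth frame (here N = 3, chosen arbitrarily).
private static final int N = 3;
private int mFrameCount = 0;

void onDrawFrame() {
    mCameraTexture.updateTexImage();   // always latch the newest frame
    drawPreview(mCameraTexId);         // cheap: blit raw preview to screen

    if (mFrameCount++ % N == 0) {
        runFilterChain(mCameraTexId);  // expensive: updates mProcessedTex
    }
    drawProcessed(mProcessedTex);      // shows the most recent result, lagging
}
```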
