Android camera2 jpeg framerate

Problem description

I am trying to save image sequences with fixed framerates (preferably up to 30) on an android device with FULL capability for camera2 (Galaxy S7), but I am unable to a) get a steady framerate, b) reach even 20fps (with jpeg encoding). I already included the suggestions from Android camera2 capture burst is too slow.

According to

characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputMinFrameDuration(ImageFormat.JPEG, size);

the stall duration is 0 ms for every size (similar for YUV_420_888).

My capture builder looks as follows:

captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CONTROL_AE_MODE_OFF);
captureBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, _exp_time);
captureBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);

captureBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, _iso_value);

captureBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, _foc_dist);
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CONTROL_AF_MODE_OFF);

captureBuilder.set(CaptureRequest.CONTROL_AWB_MODE, _wb_value);

// https://stackoverflow.com/questions/29265126/android-camera2-capture-burst-is-too-slow
captureBuilder.set(CaptureRequest.EDGE_MODE,CaptureRequest.EDGE_MODE_OFF); 
captureBuilder.set(CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE, CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE_OFF);
captureBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_OFF);
captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);

// Orientation
int rotation = getWindowManager().getDefaultDisplay().getRotation();           
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION,ORIENTATIONS.get(rotation));

Focus distance is set to 0.0 (inf), ISO is set to 100, exposure time to 5 ms. White balance can be set to OFF/AUTO/any value; it does not impact the times below.

I start the capture session with the following command:

session.setRepeatingRequest(_capReq.build(), captureListener, mBackgroundHandler);

Note: It does not make a difference whether I use setRepeatingRequest or setRepeatingBurst.

In the preview (only texture surface attached), everything is at 30fps. However, as soon as I attach an image reader (listener running on HandlerThread) which I instantiate like follows (without saving, only measuring time between frames):

reader = ImageReader.newInstance(_img_width, _img_height, ImageFormat.JPEG, 2);
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);

With the following time-measurement code:

ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader myreader) {
        Image image = myreader.acquireNextImage();
        if (image == null) {
            return;
        }
        long curr = image.getTimestamp();
        Log.d("curr - last_ts", "" + ((curr - last_ts) / 1000000) + " ms");
        last_ts = curr;
        image.close();
    }
};

I get periodically repeating time differences like this:

99 ms - 66 ms - 66 ms - 99 ms - 66 ms - 66 ms ...

I do not understand why these take double or triple the time that the stream configuration map advertises for JPEG. The exposure time is well below the frame duration of 33 ms. Is there some other internal processing happening that I am not aware of?
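For what it's worth, the observed 99/66/66 ms pattern already rules out 30 fps: it averages to roughly 77 ms per frame, i.e. about 13 fps. A quick sanity check with the intervals hard-coded from the log above:

```java
public class FpsCheck {
    public static void main(String[] args) {
        // Periodic frame-to-frame intervals taken from the log output, in ms
        long[] intervalsMs = {99, 66, 66};
        long sum = 0;
        for (long d : intervalsMs) {
            sum += d;
        }
        double avgMs = (double) sum / intervalsMs.length; // 77.0 ms
        double fps = 1000.0 / avgMs;                      // ~13 fps
        System.out.printf("avg interval: %.1f ms -> %.1f fps%n", avgMs, fps);
    }
}
```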

I tried the same for the YUV_420_888 format, which resulted in constant time-differences of 33ms. The problem I have here is that the cellphone lacks the bandwidth to store the images fast enough (I tried the method described in How to save a YUV_420_888 image?). If you know of any method to compress or encode these images fast enough myself, please let me know.

From the documentation of getOutputStallDuration: "In other words, using a repeating YUV request would result in a steady frame rate (let's say it's 30 FPS). If a single JPEG request is submitted periodically, the frame rate will stay at 30 FPS (as long as we wait for the previous JPEG to return each time). If we try to submit a repeating YUV + JPEG request, then the frame rate will drop from 30 FPS." Does this imply that I need to periodically request a single capture()?
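One reading of that passage is to leave the repeating request preview-only and submit a one-off JPEG capture() at a fixed cadence. A hypothetical, untested sketch of that pattern, reusing the `session`, `captureBuilder`, `captureListener`, and `mBackgroundHandler` objects from the question (`periodMs` is an assumed pacing value, not something the documentation prescribes):

```java
// Hypothetical sketch: repeating YUV/preview request stays untouched;
// JPEG frames are requested as periodic single captures instead of being
// part of the repeating request (which would stall the stream).
final Handler handler = mBackgroundHandler;
final long periodMs = 100; // assumed cadence: one JPEG every ~100 ms

handler.post(new Runnable() {
    @Override
    public void run() {
        try {
            session.capture(captureBuilder.build(), captureListener, handler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
        // Re-arm; in practice you would wait for the previous JPEG to
        // return (per the quoted docs) before submitting the next one.
        handler.postDelayed(this, periodMs);
    }
});
```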

Edit 2: From https://developer.android.com/reference/android/hardware/camera2/CaptureRequest.html: "The necessary information for the application, given the model above, is provided via the android.scaler.streamConfigurationMap field using getOutputMinFrameDuration(int, Size). These are used to determine the maximum frame rate / minimum frame duration that is possible for a given stream configuration.

Specifically, the application can use the following rules to determine the minimum frame duration it can request from the camera device:

- Let the set of currently configured input/output streams be called S.
- Find the minimum frame durations for each stream in S, by looking it up in android.scaler.streamConfigurationMap using getOutputMinFrameDuration(int, Size) (with its respective size/format). Let this set of frame durations be called F.
- For any given request R, the minimum frame duration allowed for R is the maximum out of all values in F. Let the streams used in R be called S_r.
- If none of the streams in S_r have a stall time (listed in getOutputStallDuration(int, Size) using its respective size/format), then the frame duration in F determines the steady state frame rate that the application will get if it uses R as a repeating request."
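The quoted rule boils down to: the minimum frame duration of a request is the maximum of the per-stream minimum frame durations. A small self-contained illustration with made-up durations (the stream names and nanosecond values are assumptions; the real values come from getOutputMinFrameDuration):

```java
import java.util.HashMap;
import java.util.Map;

public class MinFrameDuration {
    public static void main(String[] args) {
        // Hypothetical per-stream minimum frame durations in ns (the set F),
        // as would be returned by getOutputMinFrameDuration(int, Size).
        Map<String, Long> f = new HashMap<>();
        f.put("preview (YUV_420_888 1920x1080)", 33_333_333L);
        f.put("still (JPEG 4032x3024)", 33_333_333L);

        // Minimum frame duration allowed for a request R using both streams
        // is the maximum over all values in F.
        long minFrameDurationNs =
                f.values().stream().mapToLong(Long::longValue).max().orElse(0L);
        System.out.printf("min frame duration: %d ns (~%.1f fps)%n",
                minFrameDurationNs, 1e9 / minFrameDurationNs); // ~30.0 fps
    }
}
```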

Answer

The JPEG output is by no means the fastest way to fetch frames. You can accomplish this a lot faster by drawing the frames directly onto a quad using OpenGL.

For burst capture, a faster solution would be capturing the images to RAM without encoding them, then encoding and saving them asynchronously.
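A minimal, runnable sketch of that capture-to-RAM-then-encode-asynchronously pattern, using a bounded queue. The camera acquisition and JPEG encoding are replaced by stand-ins here; on Android the producer side would copy the planes out of the Image inside the ImageReader callback, and the consumer would do the actual encode and disk write:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BurstBuffer {
    public static void main(String[] args) throws InterruptedException {
        // Bounded queue of raw frames held in RAM; sized for the burst length.
        BlockingQueue<byte[]> frames = new ArrayBlockingQueue<>(64);

        // Consumer: encodes/saves frames asynchronously, off the capture thread.
        Thread encoder = new Thread(() -> {
            try {
                while (true) {
                    byte[] frame = frames.take();
                    if (frame.length == 0) {
                        break; // empty frame used as end-of-burst marker
                    }
                    // Stand-in for the real work, e.g. JPEG encode + file write.
                }
            } catch (InterruptedException ignored) {
            }
        });
        encoder.start();

        // Producer: stand-in for the ImageReader callback copying frame data.
        for (int i = 0; i < 30; i++) {
            frames.put(new byte[1024]); // would be the copied image planes
        }
        frames.put(new byte[0]); // signal end of burst
        encoder.join();
        System.out.println("encoded burst of 30 frames");
    }
}
```

The bounded queue is the key design choice: it decouples the fixed-rate capture from the variable-rate encoding while capping RAM use; if the encoder falls behind, put() blocks instead of exhausting memory.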

On this website you can find a lot of excellent code related to android multimedia in general.

This specific program uses OpenGL to fetch the pixel data from an MPEG video. It's not difficult to use the camera as input instead of a video. You can basically use the texture used in the CodecOutputSurface class from the mentioned program as output texture for your capture request.
