Is it possible to use Camera2 with Google Vision API


Question

Is it possible to detect faces using Camera2 with the Google Vision API only? I could not find a way to integrate the two.

Answer

Yes, it is possible to use the Camera2 API with the Google Vision API.

To start with, the Google Vision API face detector receives a Frame object that it uses to analyze the image (detect faces and their landmarks).

The Camera1 API provides preview frames in the NV21 image format, which is ideal for us. The Google Vision Frame.Builder supports both setImageData (a ByteBuffer in NV16, NV21 or YV12 image format) and setBitmap (a Bitmap) as the preview frame to process.
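If you already have a Bitmap (for example from a still capture), the setBitmap path is the simplest. A minimal sketch, assuming a previewBitmap variable and a FaceDetector named mDetector:

Frame bitmapFrame = new Frame.Builder()
        .setBitmap(previewBitmap) // any Bitmap, no pixel-format conversion needed
        .build();
SparseArray<Face> faces = mDetector.detect(bitmapFrame); // synchronous detection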

Your issue is that the Camera2 API provides its preview frames in a different format: YUV_420_888. To make everything work, you have to convert those preview frames into one of the supported formats.

Once you get a Camera2 preview frame from your ImageReader as an Image, you can use the following function to convert it to a supported format (NV21 in this case).

private byte[] convertYUV420888ToNV21(Image imgYUV420) {
    // Converts YUV_420_888 data to YUV_420_SP (NV21).
    // Note: this relies on the chroma planes being interleaved with a pixel
    // stride of 2 (the common case on most devices), so the V plane's buffer
    // already holds the VU-interleaved bytes that NV21 expects.
    byte[] data;
    ByteBuffer buffer0 = imgYUV420.getPlanes()[0].getBuffer(); // Y plane
    ByteBuffer buffer2 = imgYUV420.getPlanes()[2].getBuffer(); // V plane (VU interleaved)
    int buffer0_size = buffer0.remaining();
    int buffer2_size = buffer2.remaining();
    data = new byte[buffer0_size + buffer2_size];
    buffer0.get(data, 0, buffer0_size);            // copy the luma bytes first
    buffer2.get(data, buffer0_size, buffer2_size); // then the interleaved chroma bytes
    return data;
}
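For context, here is a minimal sketch of how such an Image might be obtained; the field and handler names (mImageReader, mBackgroundHandler) and the maxImages value are assumptions, not part of the original answer:

mImageReader = ImageReader.newInstance(
        mPreviewSize.getWidth(), mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888, 2); // the reader must also be added as a capture target
mImageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage(); // may be null if no frame is ready
    if (image == null) return;
    byte[] nv21bytes = convertYUV420888ToNV21(image);
    image.close(); // release the buffer back to the reader
    // ... build the Vision Frame from nv21bytes as shown below ...
}, mBackgroundHandler);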

Then you can wrap the returned byte[] in a ByteBuffer and use it to create a Google Vision Frame:

outputFrame = new Frame.Builder()
    .setImageData(ByteBuffer.wrap(nv21bytes), // setImageData expects a ByteBuffer
            mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.NV21)
    .setId(mPendingFrameId)
    .setTimestampMillis(mPendingTimeMillis)
    .setRotation(mSensorOrientation) // must be one of Frame.ROTATION_0/90/180/270
    .build();

Finally, you call the detector with the created Frame:

mDetector.receiveFrame(outputFrame);
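The snippet above assumes mDetector has already been built and given a processor to consume the results. A minimal sketch of that setup, assuming a context variable and a hypothetical FaceTrackerFactory that implements MultiProcessor.Factory&lt;Face&gt;:

FaceDetector mDetector = new FaceDetector.Builder(context)
        .setTrackingEnabled(true)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();
// receiveFrame() delivers its results to this processor asynchronously.
mDetector.setProcessor(new MultiProcessor.Builder<>(new FaceTrackerFactory()).build());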

Anyway, if you want to know more about this you can test my working example, available for free on GitHub: Camera2Vision. I hope I've helped :)
