Mobile Vision API - concatenate new detector object to continue frame processing


Problem Description

I want to use the new face detection feature that the vision API provides along with additional frame processing in an application. For this, I need to have access to the camera frame that was processed by the face detector, and chain a processor that uses the face detection data.

As I see in the sample, the CameraSource abstracts the detection and camera access, and I can't get at the frame being processed. Are there examples of how to get the camera frame in this API, or, maybe, of how to create and chain a detector that receives it? Is that possible at all?

Thanks, Lucio

Recommended Answer

Yes, it is possible. You'd need to create your own subclass of Detector which wraps FaceDetector and executes your extra frame processing code in the detect method. It would look something like this:

import android.util.SparseArray;
import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

class MyFaceDetector extends Detector<Face> {
  // The wrapped detector that performs the actual face detection.
  private Detector<Face> mDelegate;

  MyFaceDetector(Detector<Face> delegate) {
    mDelegate = delegate;
  }

  @Override
  public SparseArray<Face> detect(Frame frame) {
    // *** add your custom frame processing code here
    return mDelegate.detect(frame);
  }

  @Override
  public boolean isOperational() {
    return mDelegate.isOperational();
  }

  @Override
  public boolean setFocus(int id) {
    return mDelegate.setFocus(id);
  }
}

You'd wrap the face detector with your class, and pass your class into the camera source. It would look something like this:

    FaceDetector faceDetector = new FaceDetector.Builder(context)
            .build();
    MyFaceDetector myFaceDetector = new MyFaceDetector(faceDetector);

    myFaceDetector.setProcessor(/* include your processor here */);

    mCameraSource = new CameraSource.Builder(context, myFaceDetector)
            .build();

Your detector will be called first with the raw frame data.

Note that the image may not be upright, if the device is rotated. You can get the orientation through the frame's metadata.getRotation method.
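For instance, a small helper can translate that rotation constant into degrees before you rotate your own buffers. This is a sketch assuming the Frame.ROTATION_0 through Frame.ROTATION_270 constants take the values 0 through 3, as defined in com.google.android.gms.vision.Frame; the helper class name is hypothetical:

```java
// Hypothetical helper: converts a Frame metadata rotation constant to degrees.
// Assumes Frame.ROTATION_0 = 0, ROTATION_90 = 1, ROTATION_180 = 2, ROTATION_270 = 3.
class RotationUtil {
  static int rotationToDegrees(int rotation) {
    switch (rotation) {
      case 0:  return 0;    // Frame.ROTATION_0
      case 1:  return 90;   // Frame.ROTATION_90
      case 2:  return 180;  // Frame.ROTATION_180
      case 3:  return 270;  // Frame.ROTATION_270
      default: throw new IllegalArgumentException("Unknown rotation: " + rotation);
    }
  }
}
```

Inside detect you would call frame.getMetadata().getRotation() and feed the result to a helper like this before processing the pixels.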

One word of caution: once the detect method returns, you should not access the frame pixel data. Since the camera source recycles image buffers, the contents of the frame object will eventually be overwritten after the method returns.
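If you do need the pixels after detect returns, copy them out while the frame is still valid. The copy itself can be sketched in plain Java like this; in the real detector the source buffer would come from the frame's grayscale image data, and the class name here is hypothetical:

```java
import java.nio.ByteBuffer;

class FrameCopy {
  // Copies the contents of a (soon-to-be-recycled) buffer into a private array,
  // so the data stays valid after the camera source reuses the original buffer.
  static byte[] copyPixels(ByteBuffer source) {
    ByteBuffer readOnly = source.asReadOnlyBuffer(); // don't disturb the source's position
    readOnly.rewind();
    byte[] copy = new byte[readOnly.remaining()];
    readOnly.get(copy);
    return copy;
  }
}
```

The copy survives even if the camera source later overwrites the original buffer, which is exactly the hazard described above.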

(Additional note) You could also avoid the boilerplate code of MyFaceDetector by using a MultiDetector, like this:

MultiDetector multiDetector = new MultiDetector.Builder()
    .add(new FaceDetector.Builder(context)
                .build())
    .add(new YourReallyOwnDetector())
    .build();

Also note the use of FaceTrackerFactory in conjunction with MultiProcessor described there.
