Face detection & draw circle using Android Camera2 API

Problem Description

Currently I am trying to convert the Camera2.Face bounds to the actual view's rect in order to draw a circle over the face detected by the Camera2 API.

I am able to get the number of faces and their data in the callback with the code below:

private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    private void process(CaptureResult result) {
        // Faces are only reported when face detection is enabled on the CaptureRequest.
        Integer mode = result.get(CaptureResult.STATISTICS_FACE_DETECT_MODE);
        Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
        if (faces != null && mode != null) {
            Log.e("tag", "faces : " + faces.length + " , mode : " + mode);
        }
    }

    @Override
    public void onCaptureProgressed(CameraCaptureSession session, CaptureRequest request, CaptureResult partialResult) {
        process(partialResult);
    }

    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result) {
        process(result);
    }
};
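
For faces to actually show up in that callback, the capture request also has to enable face detection. A minimal sketch of that step, assuming mCameraCharacteristics is the camera's CameraCharacteristics and mPreviewRequestBuilder is the preview CaptureRequest.Builder (these names are not from the original post):

// Pick the best face detection mode the device supports (FULL > SIMPLE > OFF) and enable it
// on the preview request; without this, STATISTICS_FACES stays empty.
int[] availableFaceModes = mCameraCharacteristics.get(
        CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES);
int faceDetectMode = CameraMetadata.STATISTICS_FACE_DETECT_MODE_OFF;
if (availableFaceModes != null) {
    for (int mode : availableFaceModes) {
        faceDetectMode = Math.max(faceDetectMode, mode);
    }
}
mPreviewRequestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE, faceDetectMode);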

So far I have tried the code below to convert the Face rect to actual view coordinates (it does not seem to be working):

/**
 * Callback from the CameraCaptureSession.CaptureCallback
 */
@Override
public void onFaceDetection(Face[] faces) {
    if (mCameraView != null) {
        setFaceDetectionMatrix();
        setFaceDetectionLayout(faces);
    }
}

/**
 * This method gets the scaling values of the face in matrix
 */
private void setFaceDetectionMatrix() {
    // Face Detection Matrix
    mFaceDetectionMatrix = new Matrix();
    // Need mirror for front camera.
    boolean mirror = mCameraView.getFacing() == CameraView.FACING_FRONT;
    mFaceDetectionMatrix.setScale(mirror ? -1 : 1, 1);
    mFaceDetectionMatrix.postRotate(mCameraDisplayOrientation);

    Rect activeArraySizeRect = mCameraView.getCameraCharacteristics().get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
    Log.i("Test", "activeArraySizeRect1: (" + activeArraySizeRect + ") -> " + activeArraySizeRect.width() + ", " + activeArraySizeRect.height());
    Log.i("Test", "activeArraySizeRect2: " + cameraOverlayDrawingView.getWidth() + ", " + cameraOverlayDrawingView.getHeight());
    float s1 = cameraOverlayDrawingView.getWidth() / activeArraySizeRect.width();
    float s2 = cameraOverlayDrawingView.getHeight() / activeArraySizeRect.height();
    mFaceDetectionMatrix.postScale(s1, s2);
    mFaceDetectionMatrix.postTranslate(cameraOverlayDrawingView.getWidth() / 2, cameraOverlayDrawingView.getHeight() / 2);
}

/**
 * This method set the matrix for translating rect
 */
private void setFaceDetectionLayout(Face[] faces) {
    if (faces.length == 0) {
        cameraOverlayDrawingView.setHaveFaces(false, null);
    } else if (faces.length > 0) {
        List<Rect> faceRects;
        faceRects = new ArrayList<>();
        for (int i = 0; i < faces.length; i++) {
            Log.i("Test", "Activity face" + i + " bounds: " + faces[i].getBounds());
            if (faces[i].getScore() > 50) {
                int left = faces[i].getBounds().left;
                int top = faces[i].getBounds().top;
                int right = faces[i].getBounds().right;
                int bottom = faces[i].getBounds().bottom;

                Rect uRect = new Rect(left, top, right, bottom);
                RectF rectF = new RectF(uRect);
                mFaceDetectionMatrix.mapRect(rectF);
                uRect.set((int) rectF.left, (int) rectF.top, (int) rectF.right, (int) rectF.bottom);
                Log.i("Test", "Activity rect" + i + " bounds: " + uRect);
                faceRects.add(uRect);
            }
        }
        cameraOverlayDrawingView.setHaveFaces(true, faceRects);
    }
}
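
(The post does not include cameraOverlayDrawingView itself. Just for context, a minimal assumed implementation of such an overlay, here called FaceOverlayView as a made-up name, could look like the sketch below: a transparent View stacked over the preview that draws one circle per mapped face rect.)

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.util.AttributeSet;
import android.view.View;

import java.util.ArrayList;
import java.util.List;

public class FaceOverlayView extends View {

    private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private volatile List<Rect> mFaceRects = new ArrayList<>();

    public FaceOverlayView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mPaint.setStyle(Paint.Style.STROKE);
        mPaint.setStrokeWidth(5f);
        mPaint.setColor(Color.GREEN);
    }

    public void setHaveFaces(boolean haveFaces, List<Rect> faceRects) {
        // Copy the list so onDraw() never iterates a list being modified by the camera thread.
        List<Rect> copy = new ArrayList<>();
        if (haveFaces && faceRects != null) {
            copy.addAll(faceRects);
        }
        mFaceRects = copy;
        // Capture results arrive on a background thread, so use postInvalidate().
        postInvalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        for (Rect r : mFaceRects) {
            float radius = Math.max(r.width(), r.height()) / 2f;
            canvas.drawCircle(r.centerX(), r.centerY(), radius, mPaint);
        }
    }
}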

Recommended Answer

NEW: I've managed all the phone rotations. The offsetDxDy, I guess, depends on my layout, but to tell you the truth I don't know why I put a value of 100. It works well on my Huawei P9, and I found it empirically. I still haven't tried to find out whether it depends on my phone, on my XML layout, or both.

Anyway, the matrices are now found, so you can adapt them to fit your needs.

Note: my setRotation is not completely general, because I didn't parametrize it on

int orientationOffset = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);

You can try to do that yourself, so as to end up with fully general code that works with a SENSOR_ORIENTATION different from the one in this example, which is 270.
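
If you want to attempt that, a possible sketch (untested, and only an assumption on my part; mCameraCharacteristics and displayRotation are the same variables used in the code below) derives the angle from SENSOR_ORIENTATION and the display rotation instead of hard-coding it:

// Compute the rotation from the sensor orientation and the current display rotation,
// following the usual Camera2 relative-rotation formula, instead of hard-coding 270.
int sensorOrientation = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
int displayDegrees = displayRotation * 90; // Surface.ROTATION_0..ROTATION_270 map to 0, 90, 180, 270
boolean facingFront = mCameraCharacteristics.get(CameraCharacteristics.LENS_FACING)
        == CameraCharacteristics.LENS_FACING_FRONT;
int sign = facingFront ? 1 : -1;
int relativeRotation = (sensorOrientation - displayDegrees * sign + 360) % 360;
mFaceDetectionMatrix.setRotate(relativeRotation);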

So this code works for phones whose hardware camera sensor has an orientation of 270.

The Huawei P9 is one of them.

Just to give you an idea of binding the rotation to the HW sensor orientation, here is a version that also works well on my P9 (but I don't have any other hardware to test on):

if (mSwappedDimensions) {
    // Display Rotation 0
    mFaceDetectionMatrix.setRotate(orientationOffset);
    mFaceDetectionMatrix.postScale(mirror ? -s1 : s1, s2);
    mFaceDetectionMatrix.postTranslate(mPreviewSize.getHeight() + offsetDxDy, mPreviewSize.getWidth() + offsetDxDy);
} else {
    // Display Rotation 90 and 270
    if (displayRotation == Surface.ROTATION_90) {
        mFaceDetectionMatrix.setRotate(orientationOffset + 90);
        mFaceDetectionMatrix.postScale(mirror ? -s1 : s1, s2);
        mFaceDetectionMatrix.postTranslate(mPreviewSize.getWidth() + offsetDxDy, -offsetDxDy);
    } else if (displayRotation == Surface.ROTATION_270) {
        mFaceDetectionMatrix.setRotate(orientationOffset + 270);
        mFaceDetectionMatrix.postScale(mirror ? -s1 : s1, s2);
        mFaceDetectionMatrix.postTranslate(-offsetDxDy, mPreviewSize.getHeight() + offsetDxDy);
    }
}

Here is my final code (also available on GitHub):

int orientationOffset = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
Rect activeArraySizeRect = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);

// Face Detection Matrix
mFaceDetectionMatrix = new Matrix();

Log.i("Test", "activeArraySizeRect1: (" + activeArraySizeRect + ") -> " + activeArraySizeRect.width() + ", " + activeArraySizeRect.height());
Log.i("Test", "activeArraySizeRect2: " + mPreviewSize.getWidth() + ", " + mPreviewSize.getHeight());
float s1 = mPreviewSize.getWidth() / (float)activeArraySizeRect.width();
float s2 = mPreviewSize.getHeight() / (float)activeArraySizeRect.height();
//float s1 = mOverlayView.getWidth();
//float s2 = mOverlayView.getHeight();
boolean mirror = (facing == CameraCharacteristics.LENS_FACING_FRONT); // we always use front face camera
boolean weAreinPortrait = true;
int offsetDxDy = 100;
if (mSwappedDimensions) {
    // Display Rotation 0
    mFaceDetectionMatrix.setRotate(270);
    mFaceDetectionMatrix.postScale(mirror ? -s1 : s1, s2);
    mFaceDetectionMatrix.postTranslate(mPreviewSize.getHeight() + offsetDxDy, mPreviewSize.getWidth() + offsetDxDy);
} else {
    // Display Rotation 90 and 270
    if (displayRotation == Surface.ROTATION_90) {
        mFaceDetectionMatrix.setRotate(0);
        mFaceDetectionMatrix.postScale(mirror ? -s1 : s1, s2);
        mFaceDetectionMatrix.postTranslate(mPreviewSize.getWidth() + offsetDxDy, -offsetDxDy);
    } else if (displayRotation == Surface.ROTATION_270) {
        mFaceDetectionMatrix.setRotate(180);
        mFaceDetectionMatrix.postScale(mirror ? -s1 : s1, s2);
        mFaceDetectionMatrix.postTranslate(-offsetDxDy, mPreviewSize.getHeight() + offsetDxDy);
    }
}

This is the public GitHub repo where you can find the code: https://github.com/shadowsheep1/android-camera2-api-face-recon. I hope it helps you.

Anyway, just to give you some theory as well: what you are doing is a 2D plane transformation. I mean, you have a plane (the HW sensor) and you have to remap the objects on that plane onto your preview plane.

So you have to take care of the following (see the small example after this list):

  • Rotation: this depends on your HW sensor rotation and the phone rotation.
  • Mirroring: horizontal mirroring depends on whether you are using the front-facing camera, and vertical mirroring depends on the phone rotation. Mirroring is done with a '-' sign in the scaling matrix.
  • Translation: this depends on where your object ends up after the rotation (which also depends on which rotation center you are dealing with) and scaling. So you have to reposition your objects inside your preview View.
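
As a tiny illustration of how those three steps compose with android.graphics.Matrix (all the numbers below are made up, purely to show that each post* call builds on top of the previous transform):

// Hypothetical values, only to illustrate the order of operations.
Matrix m = new Matrix();
m.setRotate(270);              // 1. rotation (HW sensor orientation / display rotation)
m.postScale(-0.5f, 0.5f);      // 2. scaling, with a negative X factor to mirror the front camera
m.postTranslate(1080f, 100f);  // 3. translation to bring the result back into the view bounds

RectF face = new RectF(800, 600, 1200, 1000); // face bounds in sensor coordinates
m.mapRect(face);                              // 'face' now holds preview/view coordinates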

Mathematical theory

I also wrote some technical posts about this on my blog a while ago, but they are in Italian:

  • http://www.versionestabile.it/blog/trasformazioni-nel-piano/
  • http://www.versionestabile.it/blog/coordinate-omogenee/
