Hololens 2 & Unity - How to use the CameraToWorldMatrix to correct the position of a hologram?


Problem description


I'm using the Hololens 2 with Unity 2019.4.6.

With the PhotoCaptureFrame class, I'm taking a picture, analyzing it to find a specific marker, and then creating a hologram where the marker is. My code works well with a webcam in the Unity Editor, but on the HoloLens there is an offset between the hologram and the marker.

I tried to correct it with the cameraToWorld matrix (which I get from the PhotoCaptureFrame class). If I understand correctly, it represents the offset between the HoloLens camera and the RGB camera (the one taking the picture).
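If that is right, I assume I should be able to do something like this (markerCameraSpacePos stands for the marker position I compute in the photo camera's space):

// My assumption (possibly wrong, hence this question): transform a point
// from the photo camera's space into Unity world space.
Vector3 worldPos = cameraToWorldMatrix.MultiplyPoint(markerCameraSpacePos);
hologram.transform.position = worldPos;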

The problem is: I don't know if I'm using this matrix correctly, and my hologram position is always weird... I tried to follow this tutorial: https://forum.unity.com/threads/holographic-photo-blending-with-photocapture.416023/ and the result is wrong too! The instantiated quad is rotated and sits behind the original starting point (where I start the app). It doesn't follow the camera.
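For reference, here is how I understand the quad placement in that thread (reconstructed from the old locatable camera sample, so it may not match the thread exactly; m_Quad is the instantiated quad):

// Place the quad one meter in front of the photo camera and orient it to
// match the camera's view (the matrix columns are the camera's world-space axes).
Vector3 position = cameraToWorldMatrix.GetColumn(3) - cameraToWorldMatrix.GetColumn(2);
Quaternion rotation = Quaternion.LookRotation(-cameraToWorldMatrix.GetColumn(2), cameraToWorldMatrix.GetColumn(1));
m_Quad.transform.position = position;
m_Quad.transform.rotation = rotation;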

Do I need to convert the matrix? Can someone explain what I'm doing wrong, please? Thanks in advance!

A simple example to get the matrices: it prints true for both the CameraToWorld and Projection matrices, but the matrices are not updated when you move and take a picture from another location.

using System;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.WebCam;

public class HoloLensSnapshotTest : MonoBehaviour
{
    PhotoCapture m_PhotoCaptureObj;
    CameraParameters m_CameraParameters;
    bool m_CapturingPhoto = false;

    void Start()
    {
        Initialize();
    }


    [ContextMenu("TakePicture")]
    public void TakePicture() // called by button for now
    {
        if (m_CapturingPhoto)
        {
            return;
        }

        m_CapturingPhoto = true;
        Debug.Log("Taking picture...");
        m_PhotoCaptureObj.TakePhotoAsync(OnPhotoCaptured);
    }

    void Initialize()
    {
        Debug.Log("Initializing...");
        // Pick the first supported photo resolution (ordering is platform-defined).
        List<Resolution> resolutions = new List<Resolution>(PhotoCapture.SupportedResolutions);
        Resolution selectedResolution = resolutions[0];

        m_CameraParameters = new CameraParameters(WebCamMode.PhotoMode);
        m_CameraParameters.cameraResolutionWidth = selectedResolution.width;
        m_CameraParameters.cameraResolutionHeight = selectedResolution.height;
        m_CameraParameters.hologramOpacity = 0.0f; // capture the camera image without rendered holograms
        m_CameraParameters.pixelFormat = CapturePixelFormat.BGRA32;

        PhotoCapture.CreateAsync(false, OnCreatedPhotoCaptureObject);
    }

    void OnCreatedPhotoCaptureObject(PhotoCapture captureObject)
    {
        m_PhotoCaptureObj = captureObject;
        m_PhotoCaptureObj.StartPhotoModeAsync(m_CameraParameters, OnStartPhotoMode);
    }

    void OnStartPhotoMode(PhotoCapture.PhotoCaptureResult result)
    {
        m_CapturingPhoto = false;

        Debug.Log("Ready");
    }


    void OnPhotoCaptured(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
    {
        Matrix4x4 cameraToWorldMatrix;
        bool cameraToWorldMatrixResult = photoCaptureFrame.TryGetCameraToWorldMatrix(out cameraToWorldMatrix);

        Matrix4x4 projectionMatrix;
        bool projectionMatrixResult = photoCaptureFrame.TryGetProjectionMatrix(out projectionMatrix);

        Debug.Log("CamToWorld : " + Environment.NewLine + cameraToWorldMatrixResult + Environment.NewLine + $"{cameraToWorldMatrix.ToString()}");
        Debug.Log("Projection : " + Environment.NewLine + projectionMatrixResult + Environment.NewLine + $"{projectionMatrix.ToString()}");
        Debug.Log("Took picture!");

        m_CapturingPhoto = false;
    }
}

My matrices are:

Camera To World:

 0.00650  -0.99959  -0.02805   0.00211
-0.99965  -0.00577  -0.02588  -0.04999
 0.02571   0.02821  -0.99927  -0.01216
 0.00000   0.00000   0.00000   1.00000

Projection:

-0.06754   1.52561   0.00000   0.00000
-2.71531  -0.12021   0.00000   0.00000
 0.05028   0.01975  -1.00401  -0.20040
 0.00000   0.00000   0.00000   1.00000

Solution

Actually, an early version of the MR documentation demonstrated how to find or draw at a specific 3D location on a camera image with shader code: mixed-reality-docs/locatable-camera.md (https://github.com/MicrosoftDocs/mixed-reality/commit/a0b2c53dc295db0332832ba00b60bd7a4962e3d9#diff-39e0f47742e9a5498952f06fa42c0099R91).

But this part of the documentation has since been removed. Fortunately, I found a solution on GitHub that implements the same function in Unity C# with only a few lines of code modified: cameraToWorld.cs

The following is a quote from that solution; you can adapt it for your project. If you have any other questions, please let me know:

void OnPhotoCaptured(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    Matrix4x4 cameraToWorldMatrix;
    photoCaptureFrame.TryGetCameraToWorldMatrix(out cameraToWorldMatrix);
    Matrix4x4 projectionMatrix;
    photoCaptureFrame.TryGetProjectionMatrix(out projectionMatrix);

    // pixelPos, imageWidth and imageHeight come from your own marker detection
    // (see the sketch after UnProjectVector below). Convert the pixel coordinate
    // to [0, 1] image space (flipping y), then to [-1, 1] projection space.
    var imagePosZeroToOne = new Vector2(pixelPos.x / imageWidth, 1 - (pixelPos.y / imageHeight));
    var imagePosProjected = (imagePosZeroToOne * 2) - new Vector2(1, 1);    // -1 to 1 space

    var cameraSpacePos = UnProjectVector(projectionMatrix, new Vector3(imagePosProjected.x, imagePosProjected.y, 1));
    var worldSpaceCameraPos = cameraToWorldMatrix.MultiplyPoint(Vector3.zero);   // camera location in world space
    var worldSpaceBoxPos = cameraToWorldMatrix.MultiplyPoint(cameraSpacePos);    // ray point in world space

    // Cast a ray from the camera through the marker pixel onto the spatial mapping mesh.
    RaycastHit hit;
    bool hitToMap = Physics.Raycast(worldSpaceCameraPos, worldSpaceBoxPos - worldSpaceCameraPos, out hit, 20, SpatialMappingManager.Instance.LayerMask);
}
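If the ray hits the spatial map, hit.point is the marker's position in world space, so that is where the hologram goes. A minimal sketch (hologramPrefab is a placeholder for your own prefab):

if (hitToMap)
{
    // Place the hologram on the spatial map, oriented back toward the photo camera.
    Instantiate(hologramPrefab, hit.point, Quaternion.LookRotation(worldSpaceCameraPos - hit.point));
}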

// Approximately inverts the projection for a point at a given projected z,
// assuming the standard projection layout (the x row depends only on x and z,
// the y row only on y and z). Only the ray direction matters here, so this is enough.
public static Vector3 UnProjectVector(Matrix4x4 proj, Vector3 to)
{
    Vector3 from = new Vector3(0, 0, 0);
    var axsX = proj.GetRow(0);
    var axsY = proj.GetRow(1);
    var axsZ = proj.GetRow(2);
    from.z = to.z / axsZ.z;
    from.y = (to.y - (from.z * axsY.z)) / axsY.y;
    from.x = (to.x - (from.z * axsX.z)) / axsX.x;
    return from;
}
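In the snippet above, pixelPos, imageWidth and imageHeight are not defined; they are whatever your marker detection produces. A minimal sketch of how they could be wired up, reusing the CameraParameters from your capture script (DetectMarker is a hypothetical stand-in for your own detector):

// Assumed inputs, not part of the quoted solution:
float imageWidth = m_CameraParameters.cameraResolutionWidth;   // resolution requested in CameraParameters
float imageHeight = m_CameraParameters.cameraResolutionHeight;
Vector2 pixelPos = DetectMarker(photoCaptureFrame);            // hypothetical: marker center in pixel coordinates

A single image cannot give you the marker's depth, which is why the code only builds a ray through the pixel and lets the raycast against the spatial mapping mesh supply the depth.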

