How do I use the Microsoft Scene Understanding SDK and HoloLens 2 to align the Unity scene to the player's physical room?


Problem description


Edit: This question is closed because it needs more details, yet people keep upvoting it, so I'll do my best to add some. The question was poorly formatted, as it was really two questions: 1) How do I fix this code from the documentation? 2) How do I align the Unity scene to the player's physical space?

I feel like #1 is what the answer focused on, but #2 was the title question. My main issue was that the occlusion mesh being created was bumpy rather than flat, making collisions between holograms and the floor very wonky. I ended up creating a basic menu with the hologram of a cube in front of the player. The player was then instructed to lower the cube until it sat evenly on the floor. I then made sure no hologram could fall below that set value, either with collision planes or via code. Hope that idea can help someone. Thanks!
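The manual floor-calibration idea above can be sketched as a small Unity component. The names here (`FloorCalibration`, `floorY`, `ClampToFloor`) are illustrative, not from the original project:

```csharp
using UnityEngine;

// Sketch of the calibration approach described above: the player lowers
// a cube until it rests on the physical floor, that Y value is stored,
// and holograms are then clamped so they never sink below it.
public class FloorCalibration : MonoBehaviour
{
    // Set when the player confirms the calibration cube is on the floor.
    public float floorY;

    public void ConfirmCalibration(Transform calibrationCube)
    {
        floorY = calibrationCube.position.y;
    }

    // Call for any hologram that should never fall below the floor.
    public void ClampToFloor(Transform hologram)
    {
        Vector3 p = hologram.position;
        if (p.y < floorY)
            hologram.position = new Vector3(p.x, floorY, p.z);
    }
}
```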

When the player loads into a Unity scene on the HoloLens 2, the Unity floor plane does not match the physical floor. With the HoloLens 2 and MRTK, the Unity scene origin is locked to the player's head at (0, 0, 0).

I am trying to use the Microsoft Scene Understanding SDK to set the Unity scene environment's Y position to the floor of the physical room. I am currently able to access the floor scene object, but when I try to do the SpatialCoordinateSystem portion, I am unable to use the .ToUnity() method to convert the 4x4 matrix. I had to change the Vector3 .ToUnity() call to .ToUnityVector3(), but I cannot find a similar method for the matrix.
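For context, retrieving the floor scene object typically looks something like the following. This is only a sketch based on the Scene Understanding SDK; the query settings and search radius are illustrative, and error handling is omitted:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.MixedReality.SceneUnderstanding;

// Sketch: query the Scene Understanding runtime and pick out a floor
// scene object from the computed scene.
public static class FloorQuery
{
    public static async Task<SceneObject> FindFloorAsync(float searchRadius = 10.0f)
    {
        var settings = new SceneQuerySettings
        {
            EnableSceneObjectQuads = true,
            EnableWorldMesh = false
        };

        Scene scene = await SceneObserver.ComputeAsync(settings, searchRadius);

        // Return the first floor object found, if any.
        return scene.SceneObjects
                    .FirstOrDefault(so => so.Kind == SceneObjectKind.Floor);
    }
}
```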

Are the .ToUnity() methods mentioned in this documentation deprecated? Am I missing a reference to something? (see images for references)

I greatly appreciate any assistance here, either with this specific issue or with the overall challenge of aligning a Unity scene to a HoloLens 2 user's physical environment.

I'm following the information provided here https://docs.microsoft.com/en-us/windows/mixed-reality/develop/platform-capabilities-and-apis/scene-understanding-sdk

Solution

Hernando-MSFT's suggestion fixed the missing ToUnity() references for me. The challenge of aligning all the scene holograms to the floor still remains, but this is progress! Thank you so much, Hernando!

I basically just had to add .Value to access the matrix data (TryGetTransformTo returns a nullable matrix), and then implement this namespace:

using UnityEngine;

// Conversions between System.Numerics (right-handed) and UnityEngine
// (left-handed) types; Z components are negated to flip handedness.
namespace NumericsConversion
{
    public static class NumericsConversionExtensions
    {
        public static UnityEngine.Vector3 ToUnity(this System.Numerics.Vector3 v) => new UnityEngine.Vector3(v.X, v.Y, -v.Z);
        public static UnityEngine.Quaternion ToUnity(this System.Numerics.Quaternion q) => new UnityEngine.Quaternion(-q.X, -q.Y, q.Z, q.W);
        public static UnityEngine.Matrix4x4 ToUnity(this System.Numerics.Matrix4x4 m) => new UnityEngine.Matrix4x4(
            new Vector4(m.M11, m.M12, -m.M13, m.M14),
            new Vector4(m.M21, m.M22, -m.M23, m.M24),
            new Vector4(-m.M31, -m.M32, m.M33, -m.M34),
            new Vector4(m.M41, m.M42, -m.M43, m.M44));

        public static System.Numerics.Vector3 ToSystem(this UnityEngine.Vector3 v) => new System.Numerics.Vector3(v.x, v.y, -v.z);
        public static System.Numerics.Quaternion ToSystem(this UnityEngine.Quaternion q) => new System.Numerics.Quaternion(-q.x, -q.y, q.z, q.w);
        public static System.Numerics.Matrix4x4 ToSystem(this UnityEngine.Matrix4x4 m) => new System.Numerics.Matrix4x4(
            m.m00, m.m10, -m.m20, m.m30,
            m.m01, m.m11, -m.m21, m.m31,
           -m.m02, -m.m12, m.m22, -m.m32,
            m.m03, m.m13, -m.m23, m.m33);
    }
}
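With these extensions in place, the SpatialCoordinateSystem step from the documentation can then be written along these lines. This is a sketch: `sceneOrigin` and `unityOrigin` stand in for whatever coordinate systems your project obtains, and the .Value access is the fix mentioned above:

```csharp
// Sketch: TryGetTransformTo returns a nullable System.Numerics.Matrix4x4,
// hence the .Value access before calling the ToUnity() extension.
System.Numerics.Matrix4x4? sceneToUnity =
    sceneOrigin.TryGetTransformTo(unityOrigin);

if (sceneToUnity.HasValue)
{
    UnityEngine.Matrix4x4 m = sceneToUnity.Value.ToUnity();
    // Apply m to place the Scene Understanding objects (e.g. the floor)
    // in the Unity scene.
}
```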

I also tried dropping a cube with a Rigidbody and collider at run time to determine the Y value of the spatial-data floor. This actually seems to work relatively well, but it's definitely not the most precise solution.
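As a more direct alternative to dropping a Rigidbody, a downward physics raycast against the spatial mesh can report the floor height in one step. This is a sketch, assuming the spatial mesh has colliders (as MRTK's spatial awareness system can provide):

```csharp
using UnityEngine;

// Sketch: estimate the floor Y by raycasting straight down from the
// camera against the spatial mesh colliders.
public static class FloorProbe
{
    public static bool TryGetFloorY(out float floorY, float maxDistance = 5.0f)
    {
        Vector3 origin = Camera.main.transform.position;
        if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, maxDistance))
        {
            floorY = hit.point.y;
            return true;
        }
        floorY = 0f;
        return false;
    }
}
```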
