Understand coordinate spaces in ARKit


Question


I've read all of Apple's guides about ARKit and watched the WWDC videos. But I can't understand how the coordinate systems bound to:

  1. A real world
  2. A device
  3. A 3D scene

connect to each other.

I can add an object, for example an SCNPlane:

let stripe = SCNPlane(width: 0.005, height: 0.1)
let stripeNode = SCNNode(geometry: stripe)
scene.rootNode.addChildNode(stripeNode)

This will produce a white stripe, oriented vertically no matter how the device is oriented at that moment. That means the coordinate system is somehow bound to gravity! But if I print the upAxis attribute of the SCNScene every frame, it stays the same no matter how I rotate the iPhone. I also tried printing stripeNode.worldTransform, and it doesn't change either.

Any help in understanding ARKit coordinates is welcome.

Solution

By default, the world coordinate system in ARKit is based on ARKit's understanding of the real world around your device. (And yes, it's oriented to gravity, thanks to the device's motion sensing hardware.)

Also by default, when you use ARSCNView to display SceneKit content in an ARKit session, the coordinate system of the scene's rootNode is matched to the ARKit world coordinate system. That means that "up" in scene space is always the same as "up" in real-world space, no matter how you move your device.

Aside: The scene attributes API where you found an upAxis aren't what you think they are. Those are for the inherent settings of scenes loaded from files — e.g. someone sends you a DAE file designed in a 3D authoring tool where the Z axis means "up", so that attribute tells SceneKit to rotate the data in that file to fit SceneKit's Y-up coordinate system convention.

If you want to find the up axis of some SceneKit node relative to world space, the worldUp or simdWorldUp property is what you want. However, those are relative to world space. There's no API for asking what direction world space itself points in, because that would mean a direction relative to something else... and world space is the "absolute" coordinate system that everything else is relative to. So you have to rely on its definition.
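As a minimal sketch of the distinction above, here is how reading a node's up axis in world coordinates might look (plain SceneKit, no AR session needed; the node and its rotation are invented for illustration):

```swift
import SceneKit

let node = SCNNode()
node.eulerAngles.z = .pi / 2   // roll the node 90° around its Z axis

// simdWorldUp is the node's local +Y axis expressed in world coordinates.
// For an unrotated node it is (0, 1, 0); after the roll above it points
// along world -X. World space itself has no "up" property to query —
// up is (0, 1, 0) by definition.
print(node.simdWorldUp)
```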

The great thing about this matching of coordinate systems is that you can put SceneKit things in real-world space very easily. Your code places a white stripe at 0, 0, 0 because you didn't explicitly give it a position. In an ARKit session, 0, 0, 0 is the position of your device when you started the session... so you should be able to run that code and then take a step backwards to see your white stripe.
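A sketch of giving the stripe an explicit world position instead of leaving it at the session origin (hedged: `sceneView` is assumed to be an ARSCNView already running a world-tracking session):

```swift
import ARKit
import SceneKit

let stripe = SCNPlane(width: 0.005, height: 0.1)
stripe.firstMaterial?.diffuse.contents = UIColor.white

let stripeNode = SCNNode(geometry: stripe)
// World coordinates are in meters, gravity-aligned, with the origin at the
// device's position when the session started. -Z is "in front of" the
// initial camera, so this puts the stripe half a meter ahead of where the
// session began.
stripeNode.position = SCNVector3(0, 0, -0.5)
sceneView.scene.rootNode.addChildNode(stripeNode)
```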


In short, the coordinate system model for ARKit is this: The world is fixed, and your device/camera moves within it.

This means that if you want to do anything relative to the current position/orientation of the device, you need a conversion to camera space.
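One way to do that conversion, sketched under the assumption of an ARSCNView named `sceneView` with a valid pointOfView, is SCNNode's coordinate-conversion API:

```swift
import SceneKit

if let cameraNode = sceneView.pointOfView {
    // A point half a meter in front of the camera, expressed in camera
    // space (the camera looks down its own -Z axis)...
    let inCamera = SCNVector3(0, 0, -0.5)
    // ...converted into world coordinates. Passing nil as the destination
    // node means "convert to world space".
    let inWorld = cameraNode.convertPosition(inCamera, to: nil)
    print(inWorld)
}
```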

If you're working with SceneKit, that's easy: view.pointOfView gives you the SceneKit camera node, so you can...

  • add child nodes to that node, and they'll stay "attached" to the camera as you move it (e.g. HUD elements, or maybe a pickaxe if you're making a Minecraft-alike)
  • use the camera node as the target of a constraint to make other content in the scene interact with the camera as you move it (e.g. make a game character look at the camera)
  • use the camera node's transform (or the various convenience properties for accessing parts of the transform) to position other content in your scene (e.g. node.position = view.pointOfView.simdWorldFront + float3(0, 0, -0.5) to drop a node 50 cm in front of where the camera is right now)
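The three bullet points above might be sketched like this (hedged: `sceneView` is an assumed ARSCNView with a running session, and names like `hudNode` are placeholders, not API):

```swift
import ARKit
import SceneKit

if let cameraNode = sceneView.pointOfView {
    // 1. Attach a child node: it stays "glued" to the camera as you move.
    let hudNode = SCNNode(geometry: SCNPlane(width: 0.02, height: 0.02))
    hudNode.position = SCNVector3(0.03, -0.03, -0.1) // lower right, 10 cm ahead
    cameraNode.addChildNode(hudNode)

    // 2. Constrain another node to always face the camera.
    let characterNode = SCNNode()
    let lookAt = SCNLookAtConstraint(target: cameraNode)
    lookAt.isGimbalLockEnabled = true  // keep the node upright while it turns
    characterNode.constraints = [lookAt]
    sceneView.scene.rootNode.addChildNode(characterNode)

    // 3. Use the camera's world transform to drop a node 50 cm in front of
    //    the camera's current position (simdWorldFront is the node's -Z
    //    axis in world space).
    let droppedNode = SCNNode(geometry: SCNSphere(radius: 0.02))
    droppedNode.simdPosition = cameraNode.simdWorldPosition
                             + cameraNode.simdWorldFront * 0.5
    sceneView.scene.rootNode.addChildNode(droppedNode)
}
```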
