ARKit save object position and see it in any next session


Question

I am working on a project using ARKit. I need to save an object's position and see it in the same place on the next application launch, wherever it was. For example, at my office I attach some text to a door, then go home; the next day I want to see that text in the same place. Is this possible in ARKit?

Solution

In iOS 12: Yes!

"ARKit 2", aka ARKit for iOS 12, adds a set of features Apple calls "world map persistence and sharing". You can take everything ARKit knows about its local environment, including any ARAnchors you're using to track the real-world positions of virtual content, and save it in an ARWorldMap object.

Then you can serialize that object to a file, and load the file later to effectively resume the earlier AR session (if the user is in the same local environment). Upon successfully "relocalizing" to the world map, your session has all the same ARAnchors it did before saving, so you can use that to re-create your virtual content (e.g. use the name of a saved/restored anchor to decide which 3D model to show).
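In rough outline, the save/restore flow looks like this (a sketch under the iOS 12 API, not Apple's full sample code; the function names and file URL handling are illustrative):

```swift
import ARKit

// Saving: capture the current world map (including all anchors) from a
// running ARSession and archive it to a file.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Can't get world map: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            let data = try NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true)
            try data.write(to: url, options: .atomic)
        } catch {
            print("Can't save world map: \(error)")
        }
    }
}

// Restoring: unarchive the map, assign it as the configuration's
// initialWorldMap, and run the session. ARKit then tries to relocalize;
// once it succeeds, the saved ARAnchors reappear in the session.
func restoreSession(_ session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else {
        throw CocoaError(.coderReadCorrupt)
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```

Note that relocalization only succeeds if the user is in (and the camera can see) the same environment the map was recorded in; you can watch ARCamera.TrackingState to know when the session has relocalized.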

For more info, see the WWDC18 talk on ARKit 2 or Apple's ARKit docs and sample code.

Otherwise, probably not.

Before iOS 12, ARKit doesn’t provide a way to make any results of its local-world mapping persistent. Everything you do, every point you locate, within an AR session is defined only in the context of that session. If you place some virtual content based on plane detection, hit testing, and/or user input, the frame of reference for that position is relative to where your device was at the beginning of the session.

With no frame of reference that can persist across sessions, there’s no way to position virtual content that’ll have it appear to stay in the same real-world position/orientation after (fully) quitting/restarting the app.

But maybe...

One of the additions from "ARKit 1.5" in iOS 11.3 is sort of an escape valve for this problem: image detection. If your app’s use case involves a known/controlled environment (for example, using virtual overlays to guide visitors in an art museum), and there are some easily recognizable 2D features in that environment (like notable paintings), ARKit can detect their positions.

Once you’ve detected an image anchor that you know is a fixed feature of the environment, you can tell your AR session to redefine its world coordinate system around that anchor (see setWorldOrigin). After doing that, you effectively have a coordinate system that’s the same across multiple sessions (assuming you detect the same image and set the world origin in each session).
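A minimal sketch of that approach, assuming the reference images live in an asset catalog group named "AR Resources" and that "famous-painting" is the name of one such image (both names are illustrative):

```swift
import ARKit

// Detect known reference images and re-origin the world coordinate
// system at the first fixed-landmark image found, so that coordinates
// are comparable across sessions that detect the same image.
class LandmarkRelocalizer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        // Images bundled in an "AR Resources" asset catalog group (assumed name).
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) ?? []
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors
        where imageAnchor.referenceImage.name == "famous-painting" {
            // Make the detected image's pose the origin of world space.
            session.setWorldOrigin(relativeTransform: imageAnchor.transform)
        }
    }
}
```

Any virtual content you then place (and persist yourself, e.g. as transforms in a file) can be restored relative to this shared origin in a later session.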
