How to track image anchors after initial detection in ARKit 1.5?


Problem description

I'm trying out image recognition with ARKit 1.5 and, as we can read in the code of the sample project from Apple: "Image anchors are not tracked after initial detection, so create an animation that limits the duration for which the plane visualization appears."

An ARImageAnchor doesn't have a center: vector_float3 like ARPlaneAnchor has, and I can't find out how to track the detected image anchors.

I would like to achieve something like in this video, that is, to have a fixed image, button, label, or whatever, staying on top of the detected image, and I don't understand how I can achieve this.

Here is the code handling the image detection result:

// MARK: - ARSCNViewDelegate (Image detection results)
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    updateQueue.async {

        // Create a plane to visualize the initial position of the detected image.
        let plane = SCNPlane(width: referenceImage.physicalSize.width,
                             height: referenceImage.physicalSize.height)
        plane.materials.first?.diffuse.contents = UIColor.blue.withAlphaComponent(0.20)
        self.planeNode = SCNNode(geometry: plane)

        self.planeNode?.opacity = 1

        /*
         `SCNPlane` is vertically oriented in its local coordinate space, but
         `ARImageAnchor` assumes the image is horizontal in its local space, so
         rotate the plane to match.
         */
        self.planeNode?.eulerAngles.x = -.pi / 2

        /*
         Image anchors are not tracked after initial detection, so create an
         animation that limits the duration for which the plane visualization appears.
         */

        // Add the plane visualization to the scene.
        if let planeNode = self.planeNode {
            node.addChildNode(planeNode)
        }

        if let imageName = referenceImage.name {
            plane.materials = [SCNMaterial()]
            plane.materials[0].diffuse.contents = UIImage(named: imageName)
        }
    }

    DispatchQueue.main.async {
        let imageName = referenceImage.name ?? ""
        self.statusViewController.cancelAllScheduledMessages()
        self.statusViewController.showMessage("Detected image \"\(imageName)\"")
    }
}

Answer

You’re already most of the way there — your code places a plane atop the detected image, so clearly you have something going on there that successfully sets the center position of the plane to that of the image anchor. Perhaps your first step should be to better understand the code you have...

ARPlaneAnchor has a center (and extent) because planes can effectively grow after ARKit initially detects them. When you first get a plane anchor, its transform tells you the position and orientation of some small patch of flat horizontal (or vertical) surface. That alone is enough for you to place some virtual content in the middle of that small patch of surface.

Over time, ARKit figures out where more of the same flat surface is, so the plane anchor’s extent gets larger. But you might initially detect, say, one end of a table and then recognize more of the far end — that means the flat surface isn’t centered around the first patch detected. Rather than change the transform of the anchor, ARKit tells you the new center (which is relative to that transform).
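
(Not part of the original answer, just a minimal sketch of that idea: in ARSCNView's didUpdate callback you can read the refined center and extent and resize the visualization. It assumes the plane node added in didAdd is the anchor node's first child.)

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeNode = node.childNodes.first,
          let plane = planeNode.geometry as? SCNPlane else { return }

    // The extent grows as ARKit sees more of the surface...
    plane.width = CGFloat(planeAnchor.extent.x)
    plane.height = CGFloat(planeAnchor.extent.z)

    // ...and the center shifts relative to the anchor's (unchanged) transform.
    planeNode.simdPosition = planeAnchor.center
}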

An ARImageAnchor doesn’t grow — either ARKit detects the whole image at once or it doesn’t detect the image at all. So when you detect an image, the anchor’s transform tells you the position and orientation of the center of the image. (And if you want to know the size/extent, you can get that from the physicalSize of the detected reference image, like the sample code does.)

So, to place some SceneKit content at the position of an ARImageAnchor (or any other ARAnchor subclass), you can:

  • Simply add it as a child node of the SCNNode ARKit creates for you in that delegate method. If you don’t do anything to change them, its position and orientation will match those of the node that owns it. (This is what the Apple sample code you’re quoting does.)

  • Place it in world space (that is, as a child of the scene’s rootNode), using the anchor’s transform to get position or orientation or both. (A sketch of this option follows after this list.)

(You can extract the translation — that is, relative position — from a transform matrix: grab the first three elements of the last column; e.g. transform.columns.3 is a float4 vector whose xyz elements are your position and whose w element is 1.)
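
(Again not from the original answer, only a sketch of the second option. `sceneView` is assumed to be your ARSCNView outlet, and the sphere is a stand-in for whatever content you want pinned at the image's detected position.)

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    // Translation is the first three elements of the last column of the anchor's transform.
    let translation = imageAnchor.transform.columns.3

    let marker = SCNNode(geometry: SCNSphere(radius: 0.01))
    marker.position = SCNVector3(translation.x, translation.y, translation.z)

    // Child of the root node, i.e. placed directly in world space.
    sceneView.scene.rootNode.addChildNode(marker)
}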

The demo video you linked to isn’t putting things in 3D space, though — it’s putting 2D UI elements on the screen, whose positions track the 3D camera-relative movement of anchors in world space.

You can easily get that kind of effect (to a first approximation) by using ARSKView (ARKit+SpriteKit) instead of ARSCNView (ARKit+SceneKit). That lets you associate 2D sprites with 3D positions in world space, and then ARSKView automatically moves and scales them so that they appear to stay attached to those 3D positions. It’s a common 3D graphics trick called "billboarding", where the 2D sprite is always kept upright and facing the camera, but moved around and scaled to match 3D perspective.

If that’s the effect you’re looking for, there’s an App(le sample code) for that, too. The Using Vision in Real Time with ARKit example is mostly about other topics, but it does show how to use ARSKView to display labels associated with ARAnchor positions. (And as you’ve seen above, placing content to match an anchor position is the same no matter which ARAnchor subclass you’re using.) Here’s the key bit in their code:

func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    // ... irrelevant bits omitted... 
    let label = TemplateLabelNode(text: labelText)
    node.addChild(label)
}

That is, just implement the ARSKView didAdd delegate method, and add whatever SpriteKit node you want as a child of the one ARKit provides.
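
(For completeness, here is an assumed setup sketch, not part of the quoted sample, showing how that delegate method would get called with image anchors; "AR Resources" stands in for whatever asset-catalog group holds your reference images.)

// e.g. in viewDidLoad of a view controller that conforms to ARSKViewDelegate;
// `sceneView` is assumed to be an ARSKView.
guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                             bundle: nil) else { return }
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = referenceImages

sceneView.delegate = self
sceneView.presentScene(SKScene(size: sceneView.bounds.size))
sceneView.session.run(configuration)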

However, the demo video does more than just sprite billboarding: the labels it associates with paintings not only stay fixed in 2D orientation, they stay fixed in 2D size (that is, they don’t scale to simulate perspective like a billboarded sprite does). What’s more, they seem to be UIKit controls, with the full set of inherited interactive behaviors that entails, not just 2D images of the kind that are easy to do with SpriteKit.

Apple’s APIs don’t provide a direct way to do this "out of the box", but it’s not a stretch to imagine some ways one could put API pieces together to get this kind of result. Here are a couple of avenues to explore:

  • If you don’t need UIKit controls, you can probably do it all in SpriteKit, using constraints to match the position of the "billboarded" nodes ARSKView provides but not their scale. That’d probably look something like this (untested, caveat emptor):

func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    let label = MyLabelNode(text: labelText) // or however you make your label
    view.scene?.addChild(label)   // SKView.scene is optional

    // constrain label to zero distance from ARSKView-provided, anchor-following node
    let zeroDistanceToAnchor = SKConstraint.distance(SKRange(constantValue: 0), to: node)
    label.constraints = [ zeroDistanceToAnchor ]
}

  • If you want UIKit elements, make the ARSKView a child view of your view controller (not the root view), and make those UIKit elements other child views. Then, in your SpriteKit scene’s update method, go through your ARAnchor-following nodes, convert their positions from SpriteKit scene coordinates to UIKit view coordinates, and set the positions of your UIKit elements accordingly. (The demo appears to be using popovers, so those you wouldn’t be managing as child views... you’d probably be updating the sourceRect for each popover.) That’s a lot more involved, so the details are beyond the scope of this already long answer; a rough sketch of the coordinate conversion follows below.
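
(A rough, untested sketch of that conversion, under the same caveat emptor. The anchorNodes and labelViews arrays and the view hierarchy are assumptions for illustration; the real bookkeeping depends on your app.)

// In your SKScene subclass; runs once per frame.
override func update(_ currentTime: TimeInterval) {
    guard let skView = self.view,                     // the ARSKView presenting this scene
          let containerView = skView.superview else { return }
    for (anchorNode, labelView) in zip(anchorNodes, labelViews) {
        // SpriteKit scene coordinates -> ARSKView coordinates...
        let pointInARView = skView.convert(anchorNode.position, from: self)
        // ...-> coordinates of the view that owns both the ARSKView and the labels.
        labelView.center = skView.convert(pointInARView, to: containerView)
    }
}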

A final note... hopefully this long-winded answer has been helpful with the key issues of your question (understanding anchor positions and placing 3D or 2D content that follows them as the camera moves).

But to clarify and give a warning about some of the key words early in your question:

When ARKit says it doesn’t track images after detection, that means it doesn’t know when/if the image moves (relative to the world around it). ARKit reports an image’s position only once, so that position doesn’t even benefit from how ARKit continues to improve its estimates of the world around you and your position in it. For example, if an image is on a wall, the reported position/orientation of the image might not line up with a vertical plane detection result on the wall (especially over time, as the plane estimate improves).

Update: In iOS 12, you can enable "live" tracking of detected images. But there are limits on how many you can track at once, so the rest of this advice may still apply.
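
(If you can target iOS 12, a minimal sketch of opting in, assuming the same "AR Resources" group as above; ARImageTrackingConfiguration is the other route if you don't need world tracking at all.)

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                                 bundle: nil) ?? []
// Tracked-image count is limited, so keep this as small as you can.
configuration.maximumNumberOfTrackedImages = 1
sceneView.session.run(configuration)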

This doesn’t mean that you can’t place content that appears to "track" that static-in-world-space position, in the sense of moving around on the screen to follow it as your camera moves.

But it does mean your user experience may suffer if you try to do things that rely on having a high-precision, real-time estimate of the image’s position. So don’t, say, try to put a virtual frame around your painting, or replace the painting with an animated version of itself. But having a text label with an arrow pointing to roughly where the image is in space is great.

