How to track image anchors after initial detection in ARKit 1.5?

Question

I'm trying ARKit 1.5 with image recognition and, as we can read in the code of the sample project from Apple: "Image anchors are not tracked after initial detection, so create an animation that limits the duration for which the plane visualization appears."

An ARImageAnchor doesn't have a center: vector_float3 like ARPlaneAnchor does, and I cannot find how to track the detected image anchors.

I would like to achieve something like in this video, that is, to have a fixed image, button, label, whatever, staying on top of the detected image, and I don't understand how I can achieve this.

Here is the code that handles the image detection result:

// MARK: - ARSCNViewDelegate (Image detection results)
/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage
    updateQueue.async {

        // Create a plane to visualize the initial position of the detected image.
        let plane = SCNPlane(width: referenceImage.physicalSize.width,
                             height: referenceImage.physicalSize.height)
        plane.materials.first?.diffuse.contents = UIColor.blue.withAlphaComponent(0.20)
        self.planeNode = SCNNode(geometry: plane)

        self.planeNode?.opacity = 1

        /*
         `SCNPlane` is vertically oriented in its local coordinate space, but
         `ARImageAnchor` assumes the image is horizontal in its local space, so
         rotate the plane to match.
         */
        self.planeNode?.eulerAngles.x = -.pi / 2

        /*
         Image anchors are not tracked after initial detection, so create an
         animation that limits the duration for which the plane visualization appears.
         */

        // Add the plane visualization to the scene.
        if let planeNode = self.planeNode {
            node.addChildNode(planeNode)
        }

        if let imageName = referenceImage.name {
            plane.materials = [SCNMaterial()]
            plane.materials[0].diffuse.contents = UIImage(named: imageName)
        }
    }

    DispatchQueue.main.async {
        let imageName = referenceImage.name ?? ""
        self.statusViewController.cancelAllScheduledMessages()
        self.statusViewController.showMessage("Detected image \"\(imageName)\"")
    }
}


Answer

You’re already most of the way there — your code places a plane atop the detected image, so clearly you have something going on there that successfully sets the center position of the plane to that of the image anchor. Perhaps your first step should be to better understand the code you have...

ARPlaneAnchor has a center (and extent) because planes can effectively grow after ARKit initially detects them. When you first get a plane anchor, its transform tells you the position and orientation of some small patch of flat horizontal (or vertical) surface. That alone is enough for you to place some virtual content in the middle of that small patch of surface.

Over time, ARKit figures out where more of the same flat surface is, so the plane anchor’s extent gets larger. But you might initially detect, say, one end of a table and then recognize more of the far end — that means the flat surface isn’t centered around the first patch detected. Rather than change the transform of the anchor, ARKit tells you the new center (which is relative to that transform).
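
For comparison, here is roughly how that growing center/extent is normally consumed. This is a sketch, not part of the question's code; it assumes the plane node was added as the anchor node's first child in didAdd, as in Apple's plane-detection sample:

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeNode = node.childNodes.first,
          let plane = planeNode.geometry as? SCNPlane else { return }

    // The extent grows as ARKit sees more of the same surface...
    plane.width = CGFloat(planeAnchor.extent.x)
    plane.height = CGFloat(planeAnchor.extent.z)

    // ...and the center moves relative to the anchor's (unchanged) transform.
    planeNode.simdPosition = planeAnchor.center
}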

An ARImageAnchor doesn’t grow — either ARKit detects the whole image at once or it doesn’t detect the image at all. So when you detect an image, the anchor’s transform tells you the position and orientation of the center of the image. (And if you want to know the size/extent, you can get that from the physicalSize of the detected reference image, like the sample code does.)

So, to place some SceneKit content at the position of an ARImageAnchor (or any other ARAnchor subclass), you can:


  • Simply add it as a child node of the SCNNode ARKit creates for you in that delegate method. If you don’t do something to change them, its position and orientation will match that of the node that owns it. (This is what the Apple sample code you’re quoting does.)

  • Place it in world space (that is, as a child of the scene’s rootNode), using the anchor’s transform to get position or orientation or both.

(You can extract the translation — that is, relative position — from a transform matrix: grab the first three elements of the last column; e.g. transform.columns.3 is a float4 vector whose xyz elements are your position and whose w element is 1.)
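
A minimal sketch of that world-space option (untested; sceneView here is assumed to be your ARSCNView, and the sphere is just a placeholder for whatever content you want to show):

func placeMarker(for anchor: ARAnchor, in sceneView: ARSCNView) {
    let markerNode = SCNNode(geometry: SCNSphere(radius: 0.01))

    // The last column of the anchor's 4x4 transform is its translation (x, y, z, 1).
    let translation = anchor.transform.columns.3
    markerNode.simdPosition = simd_float3(translation.x, translation.y, translation.z)
    // (If you also want the anchor's orientation, you could instead set
    // markerNode.simdWorldTransform = anchor.transform.)

    // Add it to the scene's root node so it lives in world space.
    sceneView.scene.rootNode.addChildNode(markerNode)
}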

The demo video you linked to isn’t putting things in 3D space, though — it’s putting 2D UI elements on the screen, whose positions track the 3D camera-relative movement of anchors in world space.

You can easily get that kind of effect (to a first approximation) by using ARSKView (ARKit+SpriteKit) instead of ARSCNView (ARKit+SceneKit). That lets you associate 2D sprites with 3D positions in world space, and then ARSKView automatically moves and scales them so that they appear to stay attached to those 3D positions. It’s a common 3D graphics trick called "billboarding", where the 2D sprite is always kept upright and facing the camera, but moved around and scaled to match 3D perspective.

If that’s the effect you’re looking for, there’s an App(le sample code) for that, too. The Using Vision in Real Time with ARKit example is mostly about other topics, but it does show how to use ARSKView to display labels associated with ARAnchor positions. (And as you’ve seen above, placing content to match an anchor position is the same no matter which ARAnchor subclass you’re using.) Here’s the key bit in their code:

func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    // ... irrelevant bits omitted... 
    let label = TemplateLabelNode(text: labelText)
    node.addChild(label)
}

That is, just implement the ARSKView didAdd delegate method, and add whatever SpriteKit node you want as a child of the one ARKit provides.
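
For completeness, the session setup behind that is the same image-detection configuration you'd use with ARSCNView; a minimal sketch (the "AR Resources" group name follows Apple's sample convention and is an assumption about your asset catalog):

func runImageDetection(on sceneView: ARSKView) {
    let configuration = ARWorldTrackingConfiguration()

    // Load the reference images to detect from the asset catalog.
    if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: nil) {
        configuration.detectionImages = referenceImages
    }

    sceneView.session.run(configuration)
}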

However, the demo video does more than just sprite billboarding: the labels it associates with paintings not only stay fixed in 2D orientation, they stay fixed in 2D size (that is, they don’t scale to simulate perspective like a billboarded sprite does). What’s more, they seem to be UIKit controls, with the full set of inherited interactive behaviors that entails, not just the kind of 2D images that are easy to do with SpriteKit.

Apple’s APIs don’t provide a direct way to do this "out of the box", but it’s not a stretch to imagine some ways one could put API pieces together to get this kind of result. Here are a couple of avenues to explore:


  • If you don’t need UIKit controls, you can probably do it all in SpriteKit, using constraints to match the position of the "billboarded" nodes ARSKView provides but not their scale. That’d probably look something like this (untested, caveat emptor):

func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    let label = MyLabelNode(text: labelText) // or however you make your label
    view.scene.addChild(label)

    // constrain label to zero distance from ARSKView-provided, anchor-following node
    let zeroDistanceToAnchor = SKConstraint.distance(SKRange(constantValue: 0), to: node)
    label.constraints = [ zeroDistanceToAnchor ]
}


  • If you want UIKit elements, make the ARSKView a child view of your view controller (not the root view), and make those UIKit elements other child views. Then, in your SpriteKit scene’s update method, go through your ARAnchor-following nodes, convert their positions from SpriteKit scene coordinates to UIKit view coordinates, and set the positions of your UIKit elements accordingly. (The demo appears to be using popovers, so those you wouldn’t be managing as child views... you’d probably be updating the sourceRect for each popover.) That’s a lot more involved, so the details are beyond the scope of this already long answer.
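
That said, the core of it is just a per-frame coordinate conversion, which might look roughly like this (untested sketch; anchorLabels is a hypothetical map from the ARKit-provided, anchor-following SKNodes to UILabels you added as subviews elsewhere):

class OverlayScene: SKScene {
    // Hypothetical bookkeeping: one UILabel per anchor-following node.
    var anchorLabels = [SKNode: UILabel]()

    override func update(_ currentTime: TimeInterval) {
        guard let arView = view as? ARSKView else { return }
        for (node, label) in anchorLabels {
            // SpriteKit scene coordinates -> ARSKView (UIKit) coordinates...
            let pointInView = arView.convert(node.position, from: self)
            // ...-> the coordinate space of the label's superview.
            if let superview = label.superview {
                label.center = superview.convert(pointInView, from: arView)
            }
        }
    }
}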

A final note... hopefully this long-winded answer has been helpful with the key issues of your question (understanding anchor positions and placing 3D or 2D content that follows them as the camera moves).

But to clarify and give a warning about some of the key words early in your question:

When ARKit says it doesn’t track images after detection, that means it doesn’t know when/if the image moves (relative to the world around it). ARKit reports an image’s position only once, so that position doesn’t even benefit from how ARKit continues to improve its estimates of the world around you and your position in it. For example, if an image is on a wall, the reported position/orientation of the image might not line up with a vertical plane detection result on the wall (especially over time, as the plane estimate improves).

Update: In iOS 12, you can enable "live" tracking of detected images. But there are limits on how many you can track at once, so the rest of this advice may still apply.
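
A minimal sketch of what enabling that looks like (iOS 12 or later; again, the "AR Resources" group name is an assumption):

func runLiveImageTracking(on sceneView: ARSCNView) {
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                                 bundle: nil) else { return }

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    // ARKit only tracks a limited number of images at once.
    configuration.maximumNumberOfTrackedImages = 2

    sceneView.session.run(configuration)
}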

This doesn’t mean that you can’t place content that appears to "track" that static-in-world-space position, in the sense of moving around on the screen to follow it as your camera moves.

But it does mean your user experience may suffer if you try to do things that rely on having a high-precision, real-time estimate of the image’s position. So don’t, say, try to put a virtual frame around your painting, or replace the painting with an animated version of itself. But having a text label with an arrow pointing to roughly where the image is in space is great.
