ARKit & Reality Composer - how to anchor scene using image coordinates


Problem description

I have written code to initialise one of three Reality Composer scenes, depending on the day of the month, when a button is pressed.

This all works fine.

The Reality Composer scenes use image detection to place the objects within the environment, but currently, as soon as the image is out of the camera view, the objects disappear.

I would like to anchor the scene with the root node placed where the image is first detected, so that users can look around the scene and the objects remain even when the image trigger is not in the camera view.

I tried adding the func renderer code below, but I get errors saying the view controller class doesn't have a .planeNode property.

 func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
     guard let imageAnchor = anchor as? ARImageAnchor else { return }
     let referenceImage = imageAnchor.referenceImage

     // Create a plane to visualize the initial position of the detected image.
     let plane = SCNPlane(width: referenceImage.physicalSize.width,
                          height: referenceImage.physicalSize.height)
     plane.materials.first?.diffuse.contents = UIColor.blue.withAlphaComponent(0.20)
     self.planeNode = SCNNode(geometry: plane)

     self.planeNode?.opacity = 1

     /*
      `SCNPlane` is vertically oriented in its local coordinate space, but
      `ARImageAnchor` assumes the image is horizontal in its local space, so
      rotate the plane to match.
      */
     self.planeNode?.eulerAngles.x = -.pi / 2

     /*
      Image anchors are not tracked after initial detection, so create an
      animation that limits the duration for which the plane visualization appears.
      */

     // Add the plane visualization to the scene.
     if let planeNode = self.planeNode {
         node.addChildNode(planeNode)
     }

     if let imageName = referenceImage.name {
         plane.materials = [SCNMaterial()]
         plane.materials[0].diffuse.contents = UIImage(named: imageName)
     }
 }

Here is my code:

import UIKit
import RealityKit
import ARKit
import SceneKit

class ViewController: UIViewController {

    @IBOutlet var move: ARView!
    @IBOutlet var arView: ARView!

    var ARBorealAnchor3: ARboreal.ArBoreal3!
    var ARBorealAnchor2: ARboreal.ArBoreal2!
    var ARBorealAnchor: ARboreal.ArBoreal!

    var Date1 = 1

    override func viewDidLoad() {
        super.viewDidLoad()

        func getSingle() {
            let date = Date()
            let calendar = Calendar.current
            let day = calendar.component(.day, from: date)
            Date1 = day
        }

        getSingle()

        ARBorealAnchor = try! ARboreal.loadArBoreal()
        ARBorealAnchor2 = try! ARboreal.loadArBoreal2()
        ARBorealAnchor3 = try! ARboreal.loadArBoreal3()

        if Date1 == 24 {
            arView.scene.anchors.append(ARBorealAnchor)
        }
        if Date1 == 25 {
            arView.scene.anchors.append(ARBorealAnchor2)
        }
        if Date1 == 26 {
            arView.scene.anchors.append(ARBorealAnchor3)
        }
    }
}

Any help would be greatly appreciated.

Cheers, Daniel Savage

Recommended answer

What is happening is that when the image anchor goes out of view, the AnchorEntity becomes unanchored, and RealityKit then stops rendering it and all of its descendants.

One way to work around this is to separate the image anchor from the content you want to render: add the image anchor manually in code, then, when the image anchor is first detected, add your content to the scene under a different world anchor. When the image anchor's transform is updated, update your world anchor to match.

That way you can use the image anchor, while it is visible, to get the latest transform, but when it disappears the rendering of the content is no longer tied to it. Something like the code below (you will have to create an AR Resource Group called ARTest and add an image named "test" to it for the anchor to work):

import ARKit
import SwiftUI
import RealityKit
import Combine

struct ContentView : View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

let arDelegate = SessionDelegate()

struct ARViewContainer: UIViewRepresentable {

  func makeUIView(context: Context) -> ARView {

    let arView = ARView(frame: .zero)

    arDelegate.set(arView: arView)
    arView.session.delegate = arDelegate

    // Create an image anchor, add it to the scene. We won't add any
    // rendering content to the anchor, it will be used only for detection
    let imageAnchor = AnchorEntity(.image(group: "ARTest", name: "test"))
    arView.scene.anchors.append(imageAnchor)

    return arView
  }

  func updateUIView(_ uiView: ARView, context: Context) {}
}

final class SessionDelegate: NSObject, ARSessionDelegate {
  var arView: ARView!
  var rootAnchor: AnchorEntity?

  func set(arView: ARView) {
    self.arView = arView
  }

  func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {

    // If we already added the content to render, ignore
    if rootAnchor != nil {
       return
    }

    // Make sure we are adding to an image anchor. Assuming only
    // one image anchor in the scene for brevity.
    guard anchors[0] is ARImageAnchor else {
      return
    }

    // Create the entity to render, could load from your experience file here
    // this will render at the center of the matched image
    rootAnchor = AnchorEntity(world: [0,0,0])
    let ball = ModelEntity(
      mesh: MeshResource.generateBox(size: 0.01),
      materials: [SimpleMaterial(color: .red, isMetallic: false)]
    )
    rootAnchor!.addChild(ball)

    // Just add another model to show how it remains in the scene even
    // when the tracking image is out of view.
    let ball2 = ModelEntity(
      mesh: MeshResource.generateBox(size: 0.10),
      materials: [SimpleMaterial(color: .orange, isMetallic: false)]
    )
    ball.addChild(ball2)
    ball2.position = [0, 0, 1]

    arView.scene.addAnchor(rootAnchor!)
  }

  func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    guard let rootAnchor = rootAnchor else {
      return
    }

    // Code is assuming you only have one image anchor for brevity
    guard let imageAnchor = anchors[0] as? ARImageAnchor else {
      return
    }

    if !imageAnchor.isTracked {
      return
    }

    // Update our fixed anchor to image transform
    rootAnchor.transform = Transform(matrix: imageAnchor.transform)
  }

}

#if DEBUG
struct ContentView_Previews : PreviewProvider {
  static var previews: some View {
    ContentView()
  }
}
#endif

NOTE: The transform of an ARImageAnchor seems to update frequently as you move around, because ARKit is still refining its estimate of the image plane (the content may appear to be in the right place while, for example, the z value is not yet accurate). Make sure the image dimensions in your AR Resource Group are accurate so the image gets better tracking.
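If those frequent transform updates make the content visibly jump, one option is to blend toward the latest image-anchor transform in session(_:didUpdate:) instead of copying it directly. This is a sketch, not part of the original answer; the 0.15 smoothing factor is an arbitrary assumption you would tune per scene:

```swift
import ARKit
import RealityKit
import simd

extension SessionDelegate {
  // Alternative didUpdate body: ease rootAnchor toward the image anchor's
  // pose each frame rather than snapping, to hide small pose corrections.
  func smoothedUpdate(for imageAnchor: ARImageAnchor) {
    guard let rootAnchor = rootAnchor, imageAnchor.isTracked else { return }

    let target = Transform(matrix: imageAnchor.transform)
    let t: Float = 0.15  // smoothing factor (assumption; 0 = frozen, 1 = instant snap)

    var current = rootAnchor.transform
    // Linearly interpolate the position and spherically interpolate the rotation.
    current.translation = mix(current.translation, target.translation, t: t)
    current.rotation = simd_slerp(current.rotation, target.rotation, t)
    rootAnchor.transform = current
  }
}
```

A plain lerp like this lags slightly behind the true pose, but for content that is meant to stay fixed in the world once detected, that trade-off is usually preferable to visible jitter.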
