Align 3D object parallel to vertical plane detected by estimatedVerticalPlane


Problem description


I have this book, but I'm currently remixing the furniture app from the video tutorial that was free on AR/VR week.

I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.

This is proving to be harder than I thought. Positioning isn't an issue. Much like the furniture placement app, you can just take column 3 of the hitTest.worldTransform and give the new geometry that vector3 as its position.
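The positioning step I describe above can be sketched like this (the helper name `place` is mine, for illustration; `hitTestResult` comes from the app's tap handling):

```swift
import ARKit
import SceneKit

// Sketch of the positioning step: the translation lives in column 3
// of the hit test's 4x4 world transform. `place` is an illustrative
// helper, not part of the app's real code.
func place(_ node: SCNNode, at hitTestResult: ARHitTestResult) {
    let t = hitTestResult.worldTransform.columns.3
    node.position = SCNVector3(t.x, t.y, t.z)
}
```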

But I do not know what I have to do to get my 3D object rotated to face forward on the aligned detected plane. As I have a canvas object, the photo is on one side of the canvas. On placement, the photo is ALWAYS facing away.

I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed the canvas on a wall to my right.

I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That allows you to get the ARPlaneAnchor from hitTest.anchor as? ARPlaneAnchor. If you try this when using .estimatedVerticalPlane, the anchor is nil.

I also didn't continue down this route, as my horizontal 3D objects started getting placed in the air. This may be down to control flow logic, but I'm ignoring it until the vertical canvas placement is working.

My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane (the UIImage overlay) or the hit-test point.

How would I get a forward vector from a 3D point? Or get the front vector from the grid image, i.e. the UIImage that is placed as an overlay when ARKit detects a vertical wall?
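One way I could imagine doing this, as a sketch: if the wall's world-space unit normal is already known (e.g. from an ARPlaneAnchor), build a quaternion that rotates the node's default forward axis onto that normal. The names `orient` and `planeNormal` are illustrative, not existing API:

```swift
import SceneKit

// Rotate a node so its default forward axis (-Z in SceneKit) points
// along a given plane normal. `planeNormal` is assumed to be a unit
// vector in world space, e.g. derived from a detected wall.
func orient(_ node: SCNNode, towardPlaneNormal planeNormal: simd_float3) {
    let defaultForward = simd_float3(0, 0, -1)
    node.simdWorldOrientation = simd_quatf(from: defaultForward,
                                           to: simd_normalize(planeNormal))
}
```

Whether -Z or +Z should face out of the wall depends on which side of the geometry carries the photo, which ties into the handedness question below.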

Here is an example. The canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with, so that I'm able to see the photo.

Things I have tried: using .estimatedVerticalPlane, and the approach from the SO answer "ARKit estimatedVerticalPlane hit test get plane rotation".

I don't know how to correctly apply the matrix and Euler angle results from that SO answer.
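My code below assumes a `float4x4.eulerAngles` extension of the kind that answer describes. One plausible decomposition looks like the following; angle conventions vary between sources, so this is a sketch rather than the exact extension from that answer:

```swift
import simd

// One possible pitch/yaw/roll extraction from the rotation part of a
// 4x4 transform. Conventions differ between sources; illustrative only.
extension float4x4 {
    var eulerAngles: simd_float3 {
        let pitch = asin(-columns.2.y)              // rotation about X
        let yaw   = atan2(columns.2.x, columns.2.z) // rotation about Y
        let roll  = atan2(columns.0.y, columns.1.y) // rotation about Z
        return simd_float3(pitch, yaw, roll)
    }
}
```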

My addPicture function:

func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hit test to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode. However there are

    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()

    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as an .estimatedVerticalPlane hitTestResult doesn't get here.
        }
    } else {
        // Transform hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}

Thanks for any help available.

Edited: I have noticed the "handedness" of the 3D model isn't correct / is opposite? Positive Z is pointing to the left and positive X is facing the camera, where I would expect the front of the model. Is this an issue?

Solution

You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.

For example, your hit test might look something like:

@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}

Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result, and that anchor is added to the ARSession. This registers the point as an area of interest for the ARSession, so we'll receive better tracking and less drift after our content is added there.
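For completeness, the wiring these snippets assume might look like this (class and outlet names here are illustrative): the view controller is the ARSCNView's delegate and runs a session with vertical plane detection enabled.

```swift
import ARKit

class PosterViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self   // so renderer(_:nodeFor:) is called
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(tapped(_:))))
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.vertical]  // required for vertical plane hits
        sceneView.session.run(configuration)
    }

    @objc func tapped(_ sender: UITapGestureRecognizer) {
        // Hit test and ARAnchor creation go here.
    }
}
```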

Next, we need to vend our SCNNode for the newly added ARAnchor. For example:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}

Here we first check whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it is a vertical plane and our content is lying flat, we need to rotate it about the x axis, so we adjust its eulerAngles to make it upright. If we were to return planeNode directly, the adjustment to its eulerAngles would be removed, so we add it as a child node of an empty node and return that.

This should result in the canvas being placed upright, parallel to the detected vertical plane.
