Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

Question

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free during AR/VR week.

I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.

This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can just take column 3 of the hitTest.worldTransform and use that vector as the new geometry's position.
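A minimal sketch of that positioning step (the node and hit-test result names are assumed, not from the original app):

let translation = hitTestResult.worldTransform.columns.3
canvasNode.position = SCNVector3(translation.x, translation.y, translation.z)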

But I do not know what I have to do to get my 3D object rotated to face forward on the detected plane it is aligned to. Since I have a canvas object, the photo is on one side of the canvas, and on placement the photo is ALWAYS facing away.

I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed the canvas on a wall to my right.

I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That approach lets you get the ARPlaneAnchor via hitTest.anchor as? ARPlaneAnchor, but if you try it when using .estimatedVerticalPlane, the anchor is nil.
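A minimal sketch of that difference (the tap location and view names are assumed):

// With .existingPlaneUsingExtent the result carries a plane anchor:
if let result = sceneView.hitTest(tapLocation, types: .existingPlaneUsingExtent).first,
   let planeAnchor = result.anchor as? ARPlaneAnchor,
   planeAnchor.alignment == .vertical {
    // planeAnchor.transform describes the detected wall
}

// With .estimatedVerticalPlane there is no ARPlaneAnchor to cast:
if let estimated = sceneView.hitTest(tapLocation, types: .estimatedVerticalPlane).first {
    // estimated.anchor is nil here; only estimated.worldTransform is available
}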

I also didn't continue down this route because my horizontal 3D objects started getting placed in the air. That may be down to control-flow logic, but I'm ignoring it until the vertical canvas placement is working.

My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane, or towards the hit-test point.

How would I get a forward vector from a 3D point? Or how would I get the front vector from the grid image, i.e. the UIImage that is placed as an overlay when ARKit detects a vertical wall?
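One way this rotation could be sketched (an illustration under assumptions, not code from the question; the node and result names are made up): for plane hit-test results the worldTransform's Y axis is generally aligned with the plane's normal, so it can serve as the direction the canvas should face.

let m = hitTestResult.worldTransform
// Column 1 is the transform's Y axis; for a plane hit it is aligned with the plane normal.
let wallNormal = simd_float3(m.columns.1.x, m.columns.1.y, m.columns.1.z)
let position = simd_float3(m.columns.3.x, m.columns.3.y, m.columns.3.z)
canvasNode.simdPosition = position
// simdLook(at:) turns the node's default front (-Z) toward the target, so the photo on
// the front of the canvas ends up facing away from the wall. Negate wallNormal if the
// normal points into the wall, or if your model's front is on +Z.
canvasNode.simdLook(at: position + wallNormal)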

Here is an example of the problem: the canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so that I can see the photo.

Things I have tried: using .estimatedVerticalPlane as in ARKit estimatedVerticalPlane hit test get plane rotation.

I don't know how to correctly apply the matrix and euler angle results from that SO answer.

My addPicture function:

func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hit test to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode. however there are

    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()

    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as a .estimatedVerticalPlane hitTestResult doesn't get here.
        }
    } else {
        // Transform the hit result to world coordinates.
        // Note: .eulerAngles on float4x4 and the `+` operator on SCNVector3 below are
        // assumed to come from helper extensions (per the linked SO answer), not shown here.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the position from column 3 of the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}

Thanks for any help available.

Edited: I have noticed the 'handedness' of the 3D model isn't correct / is opposite? Positive Z is pointing to the left and positive X is facing the camera, where I would expect the front of the model to be. Is this an issue?

Answer

You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, and then use the session callback to vend an SCNNode for the added anchor.

For example, your hit test might look something like:

@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}

Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll get better tracking and less drift after our content is added there.

Next, we need to vend our SCNNode for the newly added ARAnchor. For example:

func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}

Here we're first checking whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it is a vertical plane and our content lies flat, we need to rotate it about the x axis, so we adjust its eulerAngles to make it upright. If we were to return planeNode directly, the adjustment to its eulerAngles would be overwritten, so we add it as a child node of an empty node and return that.
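For completeness, a minimal setup sketch (view and property names assumed): vertical plane detection must be enabled and the view's delegate set for the tap hit test and the renderer(_:nodeFor:) callback above to fire.

let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.vertical]   // .vertical requires iOS 11.3+
sceneView.delegate = self                    // so renderer(_:nodeFor:) is called
sceneView.session.run(configuration)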

This should result in the canvas sitting upright and parallel to the detected vertical plane.
