ARKit – Scanning 3D Object and generating 3D Mesh from it


Question

An iOS 12 ARKit app allows us to create an ARReferenceObject, and using it we can reliably recognize the position and orientation of a real-world object. We can also save the finished .arobject file.

However:

ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.

sceneView.session.createReferenceObject(transform: transform,   // simd_float4x4
                                        center: center,         // simd_float3
                                        extent: extent) {       // simd_float3
    (referenceObject: ARReferenceObject?, error: Error?) in
    // code
}

func export(to url: URL, previewImage: UIImage?) throws { }
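Putting those two APIs together, a minimal sketch of capturing the scanned object and writing the .arobject file to disk might look like the following. The helper name and the source of `transform`, `center`, and `extent` are assumptions; in practice they would come from your own scanning UI (for example, a bounding box the user places around the object during an ARObjectScanningConfiguration session).

```swift
import ARKit

// Hypothetical helper: capture the scanned region as an ARReferenceObject
// and persist it as a .arobject file at the given URL.
func captureAndExport(session: ARSession,
                      transform: simd_float4x4,
                      center: simd_float3,
                      extent: simd_float3,
                      to url: URL) {
    session.createReferenceObject(transform: transform,
                                  center: center,
                                  extent: extent) { referenceObject, error in
        guard let referenceObject = referenceObject else {
            print("Scanning failed: \(error?.localizedDescription ?? "unknown error")")
            return
        }
        do {
            // previewImage is optional metadata shown when the file is browsed.
            try referenceObject.export(to: url, previewImage: nil)
        } catch {
            print("Export failed: \(error)")
        }
    }
}
```

Note that `createReferenceObject` only succeeds while a world-tracking session is running on a device; the completion handler delivers either the object or an error, never both.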

Question: Is there a method that allows us to reconstruct digital 3D geometry (low-poly or high-poly) from an .arobject file, using Poisson Surface Reconstruction or Photogrammetry?

Answer

As you quote:

An ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object, and is not a displayable 3D reconstruction of that object.

If you run that sample code, you can see for yourself the visualizations it creates of the reference object during scanning and after a test recognition: it's just a sparse 3D point cloud. There's certainly no photogrammetry in what Apple's API provides you, and there's not much to go on in terms of recovering realistic structure in a mesh.

That's not to say that such efforts are impossible: there have been some third-party demos of photogrammetry experiments built on top of ARKit. But:

1. That's not using ARKit 2 object scanning, just the raw pixel buffer and feature points from ARFrame.

2. The level of extrapolation in those demos would require non-trivial original R&D, as it goes far beyond the kind of information ARKit itself supplies.
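For reference, the raw per-frame data those experiments work from is available on every ARFrame. A sketch of reading it via the session delegate (the class name is an assumption; the properties are standard ARKit API):

```swift
import ARKit

final class FrameReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Full-resolution camera image, usable as input to a
        // photogrammetry-style pipeline of your own.
        let pixelBuffer: CVPixelBuffer = frame.capturedImage

        // The sparse feature points ARKit tracks for this frame; this is
        // the same kind of point cloud the scanning visualization shows.
        if let points = frame.rawFeaturePoints {
            print("Tracked \(points.points.count) feature points")
        }
        _ = pixelBuffer
    }
}
```

Everything beyond this, such as densifying the point cloud or fitting a surface to it, is work you would have to build yourself or source from a third-party reconstruction library.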

