Are there any limitations in Vuforia compared to ARCore and ARKit?


Problem Description


I am a beginner in the field of augmented reality, working on applications that create plans of buildings (floor plans, room plans, etc., with accurate measurements) using a smartphone. So I am researching the best AR SDK that can be used for this. There are not many articles pitting Vuforia against ARCore and ARKit.

Please suggest the best SDK to use, with the pros and cons of each.

Solution

Updated: July 04, 2020.

TL;DR

Google ARCore allows you to build apps for Android and iOS; with Apple ARKit you can build apps for iOS and iPadOS; and good old PTC Vuforia was designed to create apps for Android, iOS and UWP.

A crucial peculiarity of Vuforia is that it uses ARCore/ARKit technology if the hardware it is running on supports it; otherwise it falls back on its own AR technology and engine, known as a software solution without dependent hardware.

However, when developing for Android OEM smartphones, you may encounter an unpleasant problem: devices from different manufacturers need sensor calibration in order to produce the same AR experience. Luckily, Apple gadgets have no such drawback, because all the sensors used there were calibrated under identical conditions.



But to answer this question, let’s put first things first.


Google ARCore 1.18

ARCore is based on three main fundamental concepts: Motion Tracking, Environmental Understanding and Light Estimation. Thus ARCore allows a supported mobile device to track its position and orientation relative to the world in 6 degrees of freedom (6DoF) using a special technique called Concurrent Odometry and Mapping. COM also helps us detect the size and location of horizontal, vertical and angled tracked surfaces (like the ground, tables, benches, walls, slopes, etc). Motion Tracking works robustly thanks to optical data coming from a camera at 60 fps, combined with inertial data coming from the gyroscope and accelerometer at 1000 fps. Naturally, ARKit and Vuforia operate in almost the same way.



When you move your phone through the real environment, ARCore tracks the surrounding space to understand where the smartphone is relative to the world coordinates. At the tracking stage, ARCore "sows" so-called feature points, which form a sparse point cloud; this cloud lives as long as the tracking session is active. These feature points are visible to the RGB camera, and ARCore uses them to compute the phone's change in location. The visual data is then combined with measurements from the accelerometer and gyroscope (the Inertial Measurement Unit) to estimate the position and orientation of the ArCamera over time. ARCore looks for clusters of feature points that appear to lie on horizontal, vertical or angled surfaces and makes these surfaces available to your app as planes (this technique is known as plane detection). You can then use these planes to place 3D objects in your scene. After that, virtual geometry with assigned shaders is rendered by ARCore's companion, Sceneform, which supports OBJ, FBX and glTF assets and uses Filament, a real-time Physically Based Rendering (PBR) engine.

Notwithstanding the above, the Sceneform repository has now been archived and is no longer actively maintained by Google. The last released version was Sceneform 1.17.0.


ARCore's environmental understanding lets you place 3D objects and 2D annotations in a way that integrates with the real world. For example, you can place a virtual cup of coffee on the corner of your real-world table using ArAnchor.


ARCore can also define lighting parameters of a real environment and provide you with the average intensity and color correction of a given camera image. This data lets you light your virtual scene under the same conditions as the environment around you, considerably increasing the sense of realism.



Previous major updates brought ARCore such significant application programming interfaces as the Depth API, Lighting Estimation with Environmental HDR mode, Augmented Faces, Augmented Images, Sceneform Animations, Cloud Anchors and Multiplayer support. The main advantage of ARCore in Android Studio over ARKit in Xcode is the Android Emulator, which allows you to run and debug AR apps on a virtual device.



ARCore is older than ARKit. Do you remember Project Tango, released in 2014? Roughly speaking, ARCore is just a rewritten Tango SDK without depth-camera support. But the wise acquisitions of FlyBy and MetaIO helped Apple to catch up. I suppose this is extremely good for the AR industry.

The latest version of ARCore requires Android 7.0 Nougat or later, supports OpenGL ES 3.1 acceleration, and integrates with Unity, Unreal, and Web applications. At the moment the most powerful and energy-efficient chipsets for AR experiences on the Android platform are the Kirin 980 (7nm), Snapdragon 865 (7nm) and Exynos 990 (7nm).

ARCore price: FREE.

|------------------------------|------------------------------|
|        "ARCore PROs"         |        "ARCore CONs"         | 
|------------------------------|------------------------------|
| Quick Plane Detection        | Cloud Anchors hosted online  |
|------------------------------|------------------------------|
| Long-distance-accuracy       | Lack of rendering engines    |
|------------------------------|------------------------------|
| ARCore Emulator in AS        | Poor developer documentation | 
|------------------------------|------------------------------|
| High-quality Lighting API    | No external camera support   |
|------------------------------|------------------------------|
| A lot of supported devices   | Poor Google Glass API        |
|------------------------------|------------------------------|
| ToF and Depth API support    | No Body Tracking support     |
|------------------------------|------------------------------|

Here's an ARCore code snippet written in Kotlin:

private fun addNodeToScene(fragment: ArFragment, anchor: Anchor, renderable: Renderable) {

    // Wrap the ARCore anchor in a Sceneform node pinned to that real-world pose
    val anchorNode = AnchorNode(anchor)
    anchorNode.setParent(fragment.arSceneView.scene)

    // A user-transformable node (drag/rotate/pinch) that carries the 3D model
    val modelNode = TransformableNode(fragment.transformationSystem)
    modelNode.setParent(anchorNode)
    modelNode.setRenderable(renderable)
    modelNode.localPosition = Vector3(0.0f, 0.0f, -3.0f)  // 3 m in front of the anchor
    fragment.arSceneView.scene.addChild(anchorNode)

    modelNode.select()
}



Apple ARKit 4.0

ARKit was released in June 2017, and just two years later it became very popular. Like its competitors, ARKit uses a special technique, called Visual Inertial Odometry, to track the world around your device very accurately. VIO is quite similar to COM found in ARCore. There are also three similar fundamental concepts in ARKit: World Tracking, Scene Understanding (which includes four stages: Plane Detection, Hit-Testing / Ray-Casting, Scene Reconstruction and Light Estimation), and Rendering, with the great help of ARKit's companions – the SceneKit framework, which has actually been Apple's 3D game engine since 2012; the RealityKit framework, specially made for AR and written in Swift from scratch (released in 2019); and the SpriteKit framework with its 2D engine (around since 2013).
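
Here's a minimal sketch (the class and outlet names are my own) of how the first two concepts look in code: running 6DoF World Tracking with plane detection, then ray-casting to find a real-world surface under a screen point.

import ARKit
import UIKit

class ViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // 6DoF World Tracking with horizontal and vertical plane detection
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        sceneView.session.run(config)
    }

    // Ray-casting (iOS 13+): find a surface under an arbitrary screen point
    func surfaceTransform(at point: CGPoint) -> simd_float4x4? {
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any) else { return nil }
        return sceneView.session.raycast(query).first?.worldTransform
    }
}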

VIO fuses RGB sensor data at 60 fps with Core Motion (IMU) data at 1000 fps. In addition, SceneKit, for example, can render all the 3D geometry at 30/60/120 fps. Under such circumstances, it should be noted that due to the very high energy impact (an enormous burden on the CPU and GPU), your iPhone's battery will drain pretty quickly.

ARKit has a handful of useful methods for robust tracking and accurate measurements. Among its arsenal you can find easy-to-use functionality for saving and retrieving ARWorldMaps. A world map is an indispensable "portal" for persistent and multiuser AR experiences: it lets you come back to the same environment, filled with the same chosen 3D content, as it was just before your app became inactive. Support for simultaneous front and back camera capture, and support for collaborative sessions that let us share World Maps, are also great.
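
For illustration, here's a hedged sketch of that save-and-restore cycle (the function names and file handling are mine; the ARKit calls are the real ones):

import ARKit

// Serialize the current ARWorldMap to disk for a persistent experience
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true)
        else { return }
        try? data.write(to: url)
    }
}

// Relocalize into a previously saved map on the next session run
func restoreWorldMap(in session: ARSession, from url: URL) throws {
    let data = try Data(contentsOf: url)
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}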

There is good news for gamers: up to six people can play the same AR game simultaneously, thanks to the MultipeerConnectivity framework. For 3D geometry you can use the brand-new USDZ file format, developed and supported by Pixar, which is a good choice for sophisticated 3D models with lots of PBR shaders and animations. ARKit also accepts several other common 3D formats.

ARKit can not only track the position and orientation of your device relative to the world in 6 DoF, but also help you perform the People Occlusion technique (based on alpha and depth channel segmentation), Body Motion Capture tracking, 2D tracking, vertical and horizontal plane detection, image detection, 3D object detection and 3D object scanning. With the People Occlusion tool, your AR content realistically passes behind and in front of people in the real world, making AR experiences even more immersive. Also available now are realistic reflections that use machine-learning algorithms, and face-based AR experiences that can track up to three faces at a time.
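
Enabling these features is mostly a matter of opting in on supported hardware. A minimal sketch:

import ARKit

let config = ARWorldTrackingConfiguration()

// People Occlusion: per-pixel person segmentation with depth (A12+, iOS 13+)
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    config.frameSemantics.insert(.personSegmentationWithDepth)
}

// Body Motion Capture runs under its own dedicated configuration
if ARBodyTrackingConfiguration.isSupported {
    // session.run(ARBodyTrackingConfiguration())
}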



Using iBeacons along with ARKit, you can help an iBeacon-aware application know which room it's in and show the correct 3D/2D content chosen for that room. When working with ARKit you should intensively exploit the ARAnchor class and all its subclasses, the same way you've been using them in ARCore.
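
Here's a hedged CoreLocation sketch of the iBeacon side (the UUID and the room-to-content mapping are hypothetical, and beacon ranging requires location permission):

import CoreLocation

final class RoomDetector: NSObject, CLLocationManagerDelegate {

    private let manager = CLLocationManager()
    // Hypothetical UUID shared by the beacons you deployed
    private let beaconUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(satisfying: CLBeaconIdentityConstraint(uuid: beaconUUID))
    }

    func locationManager(_ manager: CLLocationManager,
                         didRange beacons: [CLBeacon],
                         satisfying constraint: CLBeaconIdentityConstraint) {
        // major/minor identify the room; load that room's AR content accordingly
        guard let nearest = beacons.first else { return }
        print("Room id:", nearest.major, nearest.minor)
    }
}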


Pay particular attention to RealityKit's satellite – the Reality Composer application, which is now a part of Xcode. This brand-new app helps you build 3D scenes for AR. Scenes built in Reality Composer can be packed with dynamics, simple animations and PBR materials. Reality Composer can also be installed on iOS and iPadOS as a standalone app.

To create AR apps built on the latest version of ARKit 4.0, including brand-new LiDAR scanner support, you need macOS 10.16 Big Sur, Xcode 12 and a device running iOS 14 or iPadOS 14. The sad news is that all of ARKit 4.0's top features are restricted to devices powered by Apple A12 and A13 processors. Also, ARKit 4.0 is a worthy candidate to marry the Metal framework for GPU acceleration. And don't forget that ARKit tightly integrates with Unity and Unreal. At the moment the most powerful and energy-efficient chipsets for AR experiences are the A13 Bionic (7nm) and A12 Bionic (7nm).
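
Given those hardware restrictions, it's wise to gate ARKit 4.0 features behind capability checks. A minimal sketch:

import ARKit

let config = ARWorldTrackingConfiguration()

// LiDAR-only: real-time meshing of the environment (ARKit 3.5+)
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh
}

// LiDAR-only: per-pixel depth from the ARKit 4.0 Depth API
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    config.frameSemantics.insert(.sceneDepth)
}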

ARKit price: FREE.

|------------------------------|------------------------------|
|         "ARKit PROs"         |         "ARKit CONs"         | 
|------------------------------|------------------------------|
| Stable 6 DoF World Tracking  | No auto-update for Anchors   |
|------------------------------|------------------------------|
| Collaborative Sessions       | ARKit 4.0 / 3.5 Restrictions |
|------------------------------|------------------------------|
| WorldMaps, iBeacon-awareness | No ARKit Simulator in Xcode  |
|------------------------------|------------------------------|
| 4 rendering technologies     | No external camera support   |
|------------------------------|------------------------------|
| Rich developer documentation | Quickly drains your battery  |
|------------------------------|------------------------------|
| LiDAR and Depth API support  | No AR glasses support        |
|------------------------------|------------------------------|

Here's an ARKit code snippet written in Swift:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    // Called when ARKit adds a new anchor; we only care about detected planes
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let planeNode = tableTop(planeAnchor)
    node.addChildNode(planeNode)
}

func tableTop(_ anchor: ARPlaneAnchor) -> SCNNode {

    // Build a SceneKit plane matching the detected surface's current extent
    let x = CGFloat(anchor.extent.x)
    let z = CGFloat(anchor.extent.z)

    let tableNode = SCNNode()
    tableNode.geometry = SCNPlane(width: x, height: z)
    tableNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
    return tableNode
}



Apple RealityKit 2.0

You should look carefully at RealityKit, which was introduced at WWDC 2019. There's been a lot of hype around it since then. RealityKit allows you to create AR experiences for iOS/iPadOS and VR experiences for mobile devices and macOS. This high-level framework works with .usdz assets as well as with the .rcproject and .reality file formats, which you can import from the standalone macOS or iOS app – Reality Composer (RC). Cupertino's software engineers built RealityKit from the ground up for augmented reality apps, letting you create them without repetitive code. It works with Swift from scratch – there's no Objective-C legacy. And, of course, RealityKit shines not only with SwiftUI and UIKit but with Metal too.
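
As a small illustration of how little boilerplate RealityKit needs, here's a sketch of loading a bundled .usdz both synchronously and asynchronously (the asset name "chair" is hypothetical):

import RealityKit
import Combine

// Synchronous load of a bundled .usdz (blocks the calling thread)
let chair = try? Entity.loadModel(named: "chair")

// Asynchronous load via a Combine publisher
var cancellable: AnyCancellable?
cancellable = Entity.loadModelAsync(named: "chair")
    .sink(receiveCompletion: { _ in cancellable = nil },
          receiveValue: { model in
              // add `model` to an anchor in your ARView's scene here
          })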

The RealityKit framework has several fundamental blocks on which RealityKit scenes are based: the parent class Entity; the class AnchorEntity, which automatically tracks its target (unlike in ARKit); and the classes BodyTrackedEntity, ModelEntity, AmbientLight, SpotLight, DirectionalLight and PerspectiveCamera. These entities are much like SceneKit's nodes, though slightly different in hierarchical structure. And, of course, most entities have Components. It's worth noting that ModelEntity is built on MeshResource and Materials, and that RealityKit 2.0 now supports VideoMaterial.

The RealityKit framework gives you a rich set of building blocks to work with AR and VR: a new declarative Swift syntax, 3D primitives (at the moment box, plane, sphere and text), PBR materials with textures, occlusion material and video material, lighting fixtures (directional, spot and point) with realistic ray-traced shadows, spatial audio processing, different anchor types (body, camera, face, image, object, horizontal plane, vertical plane, raycastResult, ARAnchor and world), a simplified setup for collaborative sessions, robust animation and physics setup, indispensable built-in AI and ML features and many other useful things.
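
A minimal sketch assembling those blocks in an ARView – a primitive mesh, a PBR material and an anchor that RealityKit keeps updated automatically:

import RealityKit
import UIKit

let arView = ARView(frame: .zero)

// A primitive mesh with a PBR material...
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                         materials: [SimpleMaterial(color: .systemBlue,
                                                    roughness: 0.25,
                                                    isMetallic: true)])

// ...attached to an anchor that RealityKit auto-updates as tracking improves
let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(sphere)
arView.scene.anchors.append(anchor)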

The Reality Composer application gives you a simple and intuitive UI for constructing 3D scenes for augmented reality experiences. It has a royalty-free library of downloadable 3D assets that lets you construct sophisticated 3D scenes with animation, audio and dynamics, containing a thorough description of how these objects behave. You can also export your composition as a lightweight AR Quick Look experience that lets users place and preview content. In the Reality Composer app you start your project with one of five anchor types – horizontal, vertical, image, face or object – corresponding to the desired type of tracking.

RealityKit and Reality Composer price: FREE.

|------------------------------|------------------------------|
|       "RealityKit PROs"      |      "RealityKit CONs"       | 
|------------------------------|------------------------------|
| Can create AR apps w/o ARKit | Intensive usage of CPU/GPU   |
|------------------------------|------------------------------|
| Very little boilerplate code | iOS 13+, macOS 10.15+ only   |
|------------------------------|------------------------------|
| Suitable for AR/VR projects  | Start lagging on old devices |
|------------------------------|------------------------------|
| Robust API for RC scenes     | Limited shaders capabilities |
|------------------------------|------------------------------|
| Asynchronous asset loading   | Lack of Apple documentation  |
|------------------------------|------------------------------|
| Autoupdating tracking target | No AR glasses support        |
|------------------------------|------------------------------|

Here's a RealityKit code snippet written in Swift:

override func viewDidLoad() {
    super.viewDidLoad()

    // Load a scene (as named in Reality Composer) from the bundled .rcproject
    let textAnchor = try! SomeText.loadTextScene()
    let textEntity: Entity = textAnchor.realityComposer!.children[0]
    var textMC: ModelComponent = textEntity.children[0].components[ModelComponent.self]!

    // Swap in a new material and a new text mesh
    var material = SimpleMaterial()
    material.baseColor = .color(.yellow)
    textMC.materials[0] = material
    textMC.mesh = .generateText("Hello, RealityKit")

    // Write the modified component back and anchor the scene in the ARView
    textAnchor.realityComposer!.children[0].children[0].components.set(textMC)
    arView.scene.anchors.append(textAnchor)
}

One more important part of Apple's AR ecosystem is the Reality Converter app. Now, instead of using a command-line conversion tool, you can use Reality Converter. The brand-new app makes it easy to convert, view and customize .usdz 3D objects on a Mac. Simply drag and drop common 3D file formats, such as .obj, .gltf or .usd, to view the converted .usdz result, customize material properties with your own textures, and edit file metadata. You can even preview your .usdz object under a variety of lighting and environment conditions with the built-in Image-Based Lighting (IBL) options.



PTC Vuforia 9.0

In October 2015, PTC acquired Vuforia from Qualcomm for $65 million. Take into consideration that Qualcomm had launched Vuforia back in 2010. So Vuforia is the older sister in the AR family. Big sister is watching you, guys! ;)

In November 2016, Unity Technologies and PTC announced a strategic collaboration to simplify AR development. Since then the two have worked together to integrate new features of the Vuforia AR platform into the Unity game engine. Vuforia can be used with development environments such as Unity, MS Visual Studio, Apple Xcode and Android Studio. It supports a wide range of smartphones, tablets and AR smart glasses, such as HoloLens, Magic Leap, Vuzix M400 and ODG R7.

Vuforia Engine boasts approximately the same main capabilities that you find in the latest version of ARKit, but it also has its own features, such as Model Targets with Deep Learning, VISLAM for markerless AR experiences, External Camera support for iOS, new experimental APIs for ARCore and ARKit, and support for the industry's latest AR glasses. The main advantage of Vuforia over ARKit and ARCore is that it has a wider list of supported devices, and it supports the development of Universal Windows Platform apps for Intel-based Windows 10 devices, including Microsoft Surface and HoloLens.

Vuforia has a standalone version and a version baked directly into Unity. It has the following functionality:

  • Advanced Model Targets 360 (recognition powered by AI),
  • Model Targets with Deep Learning (instantly recognize objects by shape using pre-existing 3D models and deep-learning algorithms),
  • Image Targets (the easiest way to put AR content on flat objects),
  • Multi Targets (for objects with flat surfaces and multiple sides),
  • Cylinder Targets (for placing AR content on objects with cylindrical shapes, like bottles),
  • Ground Plane (as a part of Smart Terrain, this feature enables digital content to be placed on floors and tabletop surfaces),
  • VuMarks (allow you to identify and add content to a series of objects),
  • Object Targets (for scanning an object),
  • Static and Adaptive Modes (for stationary and moving objects),
  • Simulation Play Mode (allows developers to "walk through" or around a 3D model and see the final AR experience from their computer),
  • and, of course, Vuforia Fusion and Vuforia Engine Area Targets.

Vuforia Fusion is a capability designed to solve the problem of fragmentation in AR enabling technologies such as cameras, sensors, chipsets, and software frameworks like ARKit. With Vuforia Fusion, your application will automatically provide the best experience possible with no extra work required on your end.

Vuforia Engine Area Targets enable developers to use an entire space, be it a factory floor or a retail store, as an augmented reality target. Using the first supported device, the Matterport Pro2 camera, developers can create a detailed 3D scan of a desired location. Locations are recommended to be indoors, mostly static, and no larger than 1,000 sqm (around 10,000 sqft). Once the scan produces a 3D model, it can be converted into an Area Target with the Vuforia Area Target Generator. This target can then be brought into Unity, where content can be placed within a digital representation of the space.

The Vuforia API allows for a Static or Adaptive mode. When the real-world model remains stationary, like a large industrial machine, implementing the Static API uses significantly less processing power. This enables a longer-lasting, higher-performance experience for those models. For objects that won't be stationary, the Adaptive API allows for a continued robust experience.

The External Camera feature is a part of the Vuforia Engine Driver Framework. External Camera provides a new perspective on what's possible with augmented reality. It allows Vuforia Engine to access external video sources beyond the cameras built into phones and tablets. By using an independent camera, developers can create AR experiences that offer a first-person view from toys, robots or industrial tools.

Occlusion Management is one of the key features for building a realistic AR experience. When you're using Occlusion Management, Vuforia Engine detects and tracks targets, even when they’re partially hidden behind everyday barriers, like your hand. Special occlusion handling allows apps to display graphics as if they appear inside physical objects.

Vuforia supports Metal acceleration for iOS devices. You can also use the Vuforia Samples for your projects. For example, the Vuforia Core Samples library includes various scenes using Vuforia features, including a pre-configured Object Recognition scene that you can use as a reference and starting point for an Object Recognition application.

Here's an AR Foundation code snippet written in C#:

private void UpdatePlacementPose() {

    // Cast a ray from the screen centre against all detected planes
    var screenCenter = Camera.main.ViewportToScreenPoint(new Vector3(0.5f, 0.5f));
    var hits = new List<ARRaycastHit>();
    arOrigin.Raycast(screenCenter, hits, TrackableType.Planes);

    placementPoseIsValid = hits.Count > 0;

    if (placementPoseIsValid) {

        placementPose = hits[0].pose;

        // Align the placement pose with the camera's horizontal heading
        var cameraForward = Camera.current.transform.forward;
        var cameraBearing = new Vector3(cameraForward.x, 0, 
                                        cameraForward.z).normalized;

        placementPose.rotation = Quaternion.LookRotation(cameraBearing);
    }
}

Vuforia SDK Pricing Options:

  • Free license – you just need to register for a free Development License Key

  • Basic license ($42/month, billed annually) – For Students

  • Basic + Cloud license ($99/month) – For Small Businesses

  • Pro license (individually priced) – For All Company Types

Here are the Pros and Cons.

|------------------------------|------------------------------|
|       "Vuforia PROs"         |        "Vuforia CONs"        | 
|------------------------------|------------------------------|
| Supports Android, iOS, UWP   | The price is not reasonable  |
|------------------------------|------------------------------|
| A lot of supported devices   | Poor developer documentation |
|------------------------------|------------------------------|
| External Camera support      | SDK has some issues and bugs |
|------------------------------|------------------------------|
| Webcam/Simulator Play Mode   | Doesn't support Geo tracking |
|------------------------------|------------------------------|



CONCLUSION:

There are no significant limitations when developing with PTC Vuforia 9.0 compared to ARCore 1.18 and ARKit 4.0. Vuforia is a great old product, and it supports a wider list of Apple and Android devices, including some that aren't officially supported by ARKit or ARCore, and, of course, several of the latest models of AR glasses.

But in my opinion, ARKit 4.0 with its Reality family toolkit (RealityKit, Reality Composer and Reality Converter) has an extra bunch of useful up-to-date features that Vuforia 9.0 and ARCore 1.18 only partially have. ARKit 4.0 itself has much greater short-distance measurement accuracy than an ARCore-compatible device, within a room or on a street, without any need for calibration. This is achieved through the Apple LiDAR scanner (ARCore 1.18 uses ToF cameras and the Depth API, while Vuforia 9.0 uses its Occlusion Management feature), which allows you to create a high-quality virtual mesh with OcclusionMaterial for real-world surfaces at the scene-understanding stage. ARKit 4.0 can now detect non-planar surfaces and surfaces with no features at all, such as texture-free white walls or poorly lit surfaces.
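
To make the occlusion point concrete, here's a hedged sketch of feeding the LiDAR-generated mesh to RealityKit so that real-world surfaces occlude and collide with virtual content:

import ARKit
import RealityKit

// Use the LiDAR mesh so real surfaces hide and collide with virtual objects
func enableSceneUnderstanding(in arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    arView.environment.sceneUnderstanding.options.formUnion([.occlusion, .physics])
    arView.session.run(config)
}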

Also, if you implement iBeacon tools, WorldMaps and support for GPS, it will help you eliminate tracking errors accumulated over time. ARKit's tight integration with the Vision and CoreML frameworks also makes a huge contribution to a robust AR toolset. And integration with Apple Maps allows ARKit 4.0 to place GPS Location Anchors outdoors with the highest precision currently possible.
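
A short sketch of those GPS Location Anchors (geo-tracking only works on supported devices in supported regions, and the coordinates below are purely illustrative):

import ARKit
import CoreLocation

// Geo-tracking availability depends on the device and the user's region
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else { return }
    // session.run(ARGeoTrackingConfiguration())
}

// A GPS Location Anchor at an illustrative latitude/longitude/altitude
let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
let geoAnchor = ARGeoAnchor(coordinate: coordinate, altitude: 10)
// session.add(anchor: geoAnchor)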

So, Vuforia's measurement accuracy greatly depends on which platform you're developing for. But pay particular attention to the fact that even the popular Vuforia Chalk application was built on Apple ARKit. And that makes sense.

