Using SIFT for Augmented Reality


Question

I've come across many AR libraries/SDKs/APIs, and all of them are marker-based. Then I found this video; judging from the description and the comments, it looks like he's using SIFT to detect the object and follow it around.

I need to do this on Android, so I'm going to need a full implementation of SIFT in pure Java.

I'm willing to do that, but first I need to understand how SIFT is used for augmented reality.

I could make use of any information you give.

Answer

In my opinion, trying to implement SIFT on a portable device is madness. SIFT is an image feature extraction algorithm that involves complex math and certainly requires a lot of computing power. SIFT is also patented.

Still, if you really want to go ahead with this task, you should do quite a bit of research first. You need to look into things like:

  • Any variants of SIFT that improve performance, as well as different algorithms altogether
  • I would recommend looking into SURF, which is very robust and much faster (but still one of those scary algorithms)
  • The Android NDK (I'll explain why later)
  • Lots and lots of publications
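To get a feel for why these algorithms are so demanding, here is a minimal sketch of the difference-of-Gaussians (DoG) step at the heart of SIFT's scale-space construction. It is toy-sized, unoptimized pure Java, and not a SIFT implementation; it only illustrates the per-pixel convolution cost that gets repeated across an entire pyramid of scales.

```java
// Minimal difference-of-Gaussians (DoG) sketch: the core filtering step SIFT
// repeats over a whole scale-space pyramid. Unoptimized and toy-sized, for
// illustration only; not a SIFT implementation.
public class DoGSketch {

    // Separable 1D Gaussian kernel, normalized so its weights sum to 1.
    static double[] gaussianKernel(double sigma) {
        int radius = (int) Math.ceil(3 * sigma);
        double[] k = new double[2 * radius + 1];
        double sum = 0;
        for (int i = -radius; i <= radius; i++) {
            k[i + radius] = Math.exp(-(i * i) / (2 * sigma * sigma));
            sum += k[i + radius];
        }
        for (int i = 0; i < k.length; i++) k[i] /= sum;
        return k;
    }

    // Horizontal then vertical convolution, with borders clamped to the edge.
    static double[][] blur(double[][] img, double sigma) {
        double[] k = gaussianKernel(sigma);
        int r = k.length / 2, h = img.length, w = img[0].length;
        double[][] tmp = new double[h][w], out = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double s = 0;
                for (int i = -r; i <= r; i++)
                    s += k[i + r] * img[y][Math.min(w - 1, Math.max(0, x + i))];
                tmp[y][x] = s;
            }
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double s = 0;
                for (int i = -r; i <= r; i++)
                    s += k[i + r] * tmp[Math.min(h - 1, Math.max(0, y + i))][x];
                out[y][x] = s;
            }
        return out;
    }

    // DoG = blur(img, k*sigma) - blur(img, sigma): two full blurs and a
    // subtraction per scale level, which is where the cost adds up.
    static double[][] dog(double[][] img, double sigma, double k) {
        double[][] a = blur(img, sigma), b = blur(img, k * sigma);
        double[][] out = new double[img.length][img[0].length];
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++)
                out[y][x] = b[y][x] - a[y][x];
        return out;
    }
}
```

Even this fragment does two full convolutions per level; SIFT does this for several levels per octave, across several octaves, before it has even found a single keypoint.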

Why the Android NDK? Because you'll probably get a much more significant performance gain by implementing the algorithm in a C library that your Java application calls into.
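A common way to structure that boundary is sketched below. The class name, library name, and the trivial "descriptor" are all made up for illustration; the point is the pattern: declare the hot path as a `native` method backed by an NDK-built C library, and keep a pure-Java fallback so the app still runs (just slower) when the native library isn't available.

```java
// Hypothetical JNI boundary sketch. The heavy per-pixel work would live in a
// C library built with the NDK; a pure-Java fallback keeps things functional
// when the library is missing. Names ("NativeFeatures", "features") and the
// one-element mean-intensity "descriptor" are invented for illustration.
public class NativeFeatures {

    private static final boolean NATIVE_AVAILABLE;
    static {
        boolean ok;
        try {
            System.loadLibrary("features"); // would load libfeatures.so from the NDK build
            ok = true;
        } catch (UnsatisfiedLinkError e) {
            ok = false; // no native library: use the Java path below
        }
        NATIVE_AVAILABLE = ok;
    }

    // Implemented in C via JNI; the signature mirrors the Java fallback.
    private static native float[] extractNative(byte[] gray, int w, int h);

    // Pure-Java fallback. Here it is a trivial stand-in (mean intensity as a
    // 1-element "descriptor") so the control flow can be exercised without
    // the .so; a real app would put the actual feature extractor here.
    private static float[] extractJava(byte[] gray, int w, int h) {
        long sum = 0;
        for (byte b : gray) sum += b & 0xFF; // bytes are unsigned pixel values
        return new float[] { (float) sum / (w * h) };
    }

    public static float[] extract(byte[] gray, int w, int h) {
        return NATIVE_AVAILABLE ? extractNative(gray, w, h) : extractJava(gray, w, h);
    }
}
```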

Before starting anything, make sure you do that research, because it would be a pity to realize halfway through that image feature extraction algorithms are simply too heavy for an Android phone. Implementing such an algorithm so that it produces good results and runs in an acceptable amount of time is a serious endeavor in itself, let alone using it to build an AR application.

As for how you would use this for AR: I guess the descriptors you get from running the algorithm on an image would have to be matched against data saved in a central database, and the results could then be displayed to the user. The features gathered from SURF are supposed to describe an image such that it can then be identified from them. I'm not really experienced with this, but there are always resources on the web. You'd probably want to start with generic material such as object recognition.
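The matching step described above can be sketched in pure Java as brute-force nearest-neighbor search with Lowe's ratio test: a query descriptor is accepted only when its nearest database descriptor is clearly closer than the second nearest. This assumes descriptors are plain `float[]` vectors (SIFT's are 128-D, SURF's 64-D); a real app would use an approximate index such as a k-d tree instead of scanning the whole database.

```java
// Brute-force descriptor matching with Lowe's ratio test: accept a match
// only if the nearest database descriptor is clearly closer than the second
// nearest. Pure-Java sketch over float[] descriptors; a real system would
// replace the linear scan with an approximate nearest-neighbor index.
public class DescriptorMatcher {

    // Squared Euclidean distance between two descriptors of equal length.
    static double dist2(float[] a, float[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            s += d * d;
        }
        return s;
    }

    /** Returns, per query, the index of the matched database descriptor, or -1 if rejected. */
    public static int[] match(float[][] queries, float[][] database, double ratio) {
        int[] result = new int[queries.length];
        for (int q = 0; q < queries.length; q++) {
            int best = -1;
            double d1 = Double.MAX_VALUE, d2 = Double.MAX_VALUE; // best and second-best distances
            for (int i = 0; i < database.length; i++) {
                double d = dist2(queries[q], database[i]);
                if (d < d1) { d2 = d1; d1 = d; best = i; }
                else if (d < d2) { d2 = d; }
            }
            // Ratio test on squared distances: d1 < ratio^2 * d2.
            result[q] = (best >= 0 && d1 < ratio * ratio * d2) ? best : -1;
        }
        return result;
    }
}
```

Discarding matches that fail the ratio test is what keeps the recognition robust: a descriptor that is almost equally close to two database entries tells you nothing reliable about which object is in view.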

Good luck :)

