Using SIFT for Augmented Reality


Question

I've come across many AR libraries/SDKs/APIs, and all of them are marker-based, until I found this video. From the description and the comments, it looks like he's using SIFT to detect the object and follow it around.

I need to do that on Android, so I'll need a full implementation of SIFT in pure Java.

I'm willing to do that, but first I need to know how SIFT is used for augmented reality.

I could make use of any information you give.

Answer

In my opinion, trying to implement SIFT for a portable device is madness. SIFT is an image feature extraction algorithm that involves complex math and certainly requires a lot of computing power. SIFT is also patented.

Still, if you really want to go ahead with this task, you should do quite a bit of research first. You need to check things like:

  • Any variants of SIFT that improve performance, as well as entirely different algorithms
  • I would recommend looking into SURF, which is very robust and much faster (but still one of those scary algorithms); see the detection sketch after this list
  • The Android NDK (I'll explain why later)
  • Lots and lots of publications
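
To give a rough idea of what "looking into" one of these detectors means in practice, here is a minimal sketch using OpenCV's Java bindings. OpenCV is my assumption here (the answer above doesn't name a library), and I use ORB as a stand-in detector because the SURF Java binding only ships in the non-free contrib build; the file name scene.jpg is also just a placeholder.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

public class FeatureDetectionSketch {
    static {
        // Load OpenCV's own native library (the Java API is a thin JNI wrapper)
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    public static void main(String[] args) {
        // Placeholder input image; on Android this would come from the camera preview
        Mat image = Imgcodecs.imread("scene.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // ORB stands in for SIFT/SURF here: same detect-and-describe interface,
        // but binary descriptors and no patent/contrib issues
        ORB detector = ORB.create(500); // cap at 500 keypoints

        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        Mat descriptors = new Mat();
        detector.detectAndCompute(image, new Mat(), keypoints, descriptors);

        System.out.println("keypoints found: " + keypoints.total());
    }
}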

Why the Android NDK? Because you'll probably get a much more significant performance gain by implementing the algorithm in a C library that is called from your Java application.
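
To make that split concrete, this is roughly what the Java side would look like. The library name, class, and method below are hypothetical, just a sketch of the pattern: the heavy feature extraction lives in C behind a native method, and the Java/Android layer only passes pixel buffers across the JNI boundary.

public class NativeFeatures {
    static {
        // Hypothetical native library, built from C sources with the Android NDK;
        // the C side would implement Java_..._NativeFeatures_extractDescriptors
        System.loadLibrary("features");
    }

    // Declared here, implemented in C: takes a grayscale frame and returns the
    // descriptors as a flat float array for the Java side to match and display
    public static native float[] extractDescriptors(byte[] grayPixels,
                                                     int width,
                                                     int height);
}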

Before starting anything, make sure you do that research, because it would be a pity to realize halfway through that image feature extraction algorithms are simply too much for an Android phone. Implementing such an algorithm so that it gives good results and runs in an acceptable amount of time is a serious endeavor in itself, let alone using it to create an AR application.

As for how you would use this for AR, I guess the descriptors you get from running the algorithm on an image would have to be matched against data saved in a central database, and the results can then be displayed to the user. The features gathered from SURF are supposed to describe an image in such a way that it can later be identified from them. I'm not really experienced at doing that, but there are always resources on the web. You'd probably want to start with generic stuff such as object recognition.
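
As a rough sketch of that matching step, again assuming OpenCV's Java bindings and the ORB stand-in from the earlier snippet (binary descriptors, hence Hamming distance): descriptors computed from the live frame are matched against descriptors stored for a known reference object, and a ratio test keeps only the unambiguous matches.

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.features2d.DescriptorMatcher;

public class DescriptorMatchingSketch {

    // queryDescriptors: descriptors from the current camera frame
    // trainDescriptors: descriptors stored for a known object (the "database" entry)
    public static int countGoodMatches(Mat queryDescriptors, Mat trainDescriptors) {
        DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

        // k-nearest-neighbour matching so we can apply Lowe's ratio test
        List<MatOfDMatch> knnMatches = new ArrayList<>();
        matcher.knnMatch(queryDescriptors, trainDescriptors, knnMatches, 2);

        int good = 0;
        for (MatOfDMatch pair : knnMatches) {
            DMatch[] m = pair.toArray();
            // Keep a match only if it is clearly better than the runner-up
            if (m.length >= 2 && m[0].distance < 0.75f * m[1].distance) {
                good++;
            }
        }
        // A high count suggests the stored object is visible in the frame; the
        // matched keypoints could then be fed to findHomography to place AR content
        return good;
    }
}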

Good luck :)
