Augmented reality with Optical character recognition
Question
I have implemented an augmented reality program using Qualcomm's Vuforia library. Now I want to add an optical character recognition feature to my program, so that I can translate text from one language to another in real time. I am planning to use the Tesseract OCR library. But my question is: how do I integrate Tesseract with QCAR? Can somebody suggest a proper way to do it?
Answer
What you need is access to the camera frames, so that you can send them to Tesseract. The Vuforia SDK offers a way to access the frames through the QCAR::UpdateCallback interface (documentation here).
What you need to do is create a class that implements this protocol and register it with the Vuforia SDK using QCAR::registerCallback() (see here). From then on, you will get notified each time the Vuforia SDK has processed a frame.
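As a minimal sketch, the callback class could look like the following. This is based on the legacy QCAR C++ API as described in the Vuforia docs (header paths, QCAR_onUpdate(), setFrameFormat() and registerCallback()); the class name OcrUpdateCallback is my own placeholder, so verify the details against your SDK version:

```cpp
#include <QCAR/QCAR.h>
#include <QCAR/UpdateCallback.h>
#include <QCAR/State.h>

// Sketch of an UpdateCallback implementation (legacy QCAR C++ API).
class OcrUpdateCallback : public QCAR::UpdateCallback
{
public:
    // Called by the SDK once per processed camera frame.
    virtual void QCAR_onUpdate(QCAR::State& state)
    {
        // Hand the frame off to the OCR step (shown in the next snippet).
        // Keep this method fast: queue heavy OCR work onto a worker thread.
    }
};

static OcrUpdateCallback gOcrCallback;

// Call once during initialization, after the SDK has been initialized:
void installOcrCallback()
{
    // Request grayscale frames, which is also the format Tesseract prefers.
    QCAR::setFrameFormat(QCAR::GRAYSCALE, true);
    QCAR::registerCallback(&gOcrCallback);
}
```

Registering the callback once at startup is enough; the SDK will keep invoking QCAR_onUpdate() on every frame until the callback is unregistered or the camera is stopped.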
This callback will be provided with a QCAR::State object, from which you can get access to the camera frame (see the doc for QCAR::State::getFrame() here), and send it to the Tesseract SDK.
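Inside the callback, getting from the QCAR::State to a pixel buffer Tesseract can consume might look like this sketch. It assumes grayscale frames were requested via QCAR::setFrameFormat(), that the TessBaseAPI instance was already initialized with Init() (tessdata path and language are up to you), and it uses the documented QCAR::Frame/QCAR::Image accessors; treat it as an outline, not drop-in code:

```cpp
#include <QCAR/State.h>
#include <QCAR/Frame.h>
#include <QCAR/Image.h>
#include <tesseract/baseapi.h>

// Sketch: pull the grayscale camera image out of the frame and run OCR on it.
// Assumes QCAR::setFrameFormat(QCAR::GRAYSCALE, true) was called earlier and
// that `tess` was initialized beforehand, e.g. tess.Init(tessdataPath, "eng").
void ocrFrame(QCAR::State& state, tesseract::TessBaseAPI& tess)
{
    const QCAR::Frame frame = state.getFrame();

    for (int i = 0; i < frame.getNumImages(); ++i)
    {
        const QCAR::Image* image = frame.getImage(i);
        if (image->getFormat() != QCAR::GRAYSCALE)
            continue;  // skip the RGB/preview images, if any

        // Grayscale is 1 byte per pixel; getStride() is the row length in bytes.
        tess.SetImage(static_cast<const unsigned char*>(image->getPixels()),
                      image->getWidth(), image->getHeight(),
                      /*bytes_per_pixel=*/1,
                      /*bytes_per_line=*/image->getStride());

        char* text = tess.GetUTF8Text();
        // ... pass `text` on to the translation step ...
        delete[] text;  // GetUTF8Text() allocates; caller must free
    }
}
```

Note that Tesseract is comparatively slow, so running it on every single frame is usually wasteful; throttling to a few OCR passes per second is a common compromise.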
Be aware, though, that the Vuforia SDK works with frames at a rather low resolution (on the phones I tested, it returned frames in the 360x240 to 720x480 range, and more often the former than the latter), which may not be accurate enough for Tesseract to detect text.