Object detection ARKit vs CoreML
Question
I am building an ARKit application for iPhone. I need to detect a specific perfume bottle and display content depending on what is detected. I used the demo app from developer.apple.com to scan the real-world object and export an .arobject file that I can use in my assets. The workflow itself works, but since the bottle is made of glass, detection is very poor: it detects only at the location where the scan was made, within a 2–30 second range, or doesn't detect at all. Merging scans doesn't improve the situation and sometimes makes it even worse; the merged result may have a weird orientation.
If that fails, will CoreML help me? I could take a lot of photos and train a model. What if I check each frame for a match against that model? Does such an approach have any chance?
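The per-frame idea is feasible in principle: ARKit hands you each camera frame through `ARSessionDelegate`, and Vision can run a Core ML classifier on the frame's pixel buffer. A minimal sketch, assuming a custom image-classification model (here named `PerfumeBottleClassifier`, a hypothetical model you would train yourself, e.g. with Create ML):

```swift
import ARKit
import Vision

// Sketch: classify each ARFrame with a custom Core ML image classifier.
// "PerfumeBottleClassifier" is a hypothetical model name (an assumption);
// substitute the class Xcode generates for your .mlmodel file.
final class FrameClassifier: NSObject, ARSessionDelegate {

    // VNCoreMLModel wraps the compiled Core ML model for use with Vision.
    private lazy var request: VNCoreMLRequest? = {
        guard let model = try? VNCoreMLModel(for: PerfumeBottleClassifier().model) else {
            return nil
        }
        let request = VNCoreMLRequest(model: model) { request, _ in
            // Take the top classification and ignore low-confidence matches.
            guard let best = (request.results as? [VNClassificationObservation])?.first,
                  best.confidence > 0.8 else { return }
            print("Detected:", best.identifier)
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    }()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let request = request else { return }
        // The camera image arrives as a CVPixelBuffer; Vision handles scaling.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
    }
}
```

In practice you would throttle this (classifying every frame on the session's delegate queue is expensive) and dispatch the Vision work off the main thread. Note that a classifier only tells you the bottle is *somewhere* in view; unlike ARKit object detection, it gives you no anchor or 3D position to attach content to.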
Accepted answer
Due to glass refraction and varying lighting conditions, object recognition (in both ARKit and CoreML) is at its most difficult with perfume bottles.
Look at the following picture – there are three glass balls at different locations:
These glass balls have different Fresnel IORs (Index of Refraction), environments, camera points of view, sizes, and lighting conditions, yet they share the same shape, material, and colour.
So the best way to speed up the recognition process is to use an identical background/environment (for example, monochromatic light-grey paper as a BG), the same lighting conditions (location, intensity, colour, and direction of the light), good shape readability (thanks to specular highlights), and the same POV for your camera.
I know these tips are sometimes impossible to follow, but they do work.
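If you stay with the .arobject route, the detection side of the setup is small. A minimal sketch, assuming your scanned reference objects live in an asset-catalog resource group named "gallery" (the group name is an assumption):

```swift
import ARKit

// Sketch: enable ARKit object detection for scanned .arobject files
// stored in an AR Resource Group called "gallery" (name is an assumption).
func runObjectDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "gallery", bundle: nil) ?? []
    sceneView.session.run(configuration,
                          options: [.resetTracking, .removeExistingAnchors])
}

// In your ARSCNViewDelegate, react when a bottle is recognized:
// func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//     guard let objectAnchor = anchor as? ARObjectAnchor else { return }
//     // Attach your content for objectAnchor.referenceObject.name here.
// }
```

When detection fires, ARKit adds an `ARObjectAnchor`, which gives you the 3D pose of the bottle, so the controlled background and lighting suggested above pay off directly in how reliably this anchor appears.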