How to apply iOS VNImageHomographicAlignmentObservation warpTransform?


Problem description


I'm testing Apple's Vision alignment API and have questions regarding VNHomographicImageRegistrationRequest. Has anyone gotten it to work? I can get the warpTransform out of it, but I've yet to see a matrix that makes sense, meaning I'm unable to get a result that warps the image back onto the source image. I'm using OpenCV's warpPerspective to handle the warping.

I call it like this to get the transform:

class func homography(_ cgImage0: CGImage!, _ cgImage1: CGImage!, _ orientation: CGImagePropertyOrientation, completion: (matrix_float3x3?) -> ())
{
    let registrationSequenceReqHandler = VNSequenceRequestHandler()
    let requestHomography = VNHomographicImageRegistrationRequest(targetedCGImage: cgImage1, orientation: orientation)
    let requestTranslation = VNTranslationalImageRegistrationRequest(targetedCGImage: cgImage1, orientation: orientation)

    do
    {
        // cgImage0 is the reference image; both requests align cgImage1 to it.
        try registrationSequenceReqHandler.perform([requestHomography, requestTranslation], on: cgImage0)

        if let resultH = requestHomography.results?.first as? VNImageHomographicAlignmentObservation
        {
            completion(resultH.warpTransform)
        }
        else
        {
            // No homography observation was produced.
            completion(nil)
        }

        if let resultT = requestTranslation.results?.first as? VNImageTranslationAlignmentObservation
        {
            print("translation : \(resultT.alignmentTransform.tx) : \(resultT.alignmentTransform.ty)")
        }
    }
    catch
    {
        print("registration request failed: \(error)")
        completion(nil)
    }
}
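To make explicit what the OpenCV side of this is doing, here is a minimal Python sketch of the per-point mapping that warpPerspective applies (the helper name `apply_homography` is my own, and it assumes a row-major 3x3 matrix H):

```python
import numpy as np

def apply_homography(h, x, y):
    """Map one point through a 3x3 homography, as OpenCV's warpPerspective
    does per pixel: homogeneous multiply, then divide by the w component."""
    p = h @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# A pure translation homography moves (0, 0) to (5, 10).
h = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 10.0],
              [0.0, 0.0, 1.0]])
```

A warpTransform that "makes sense" should move test points roughly the way the two input images are offset from each other.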


This works and outputs a homography matrix, but its results are drastically different from what I get when I do SIFT + OpenCV findHomography (https://docs.opencv.org/3.0-beta/doc/tutorials/features2d/feature_homography/feature_homography.html).


Regardless of my image pairs, I'm unable to get reasonable homographic results from the Apple Vision dataset.

Thanks in advance.

Answer


For future reference, I was able to correlate Apple's homography matrix with OpenCV's matrix. Basically, Core Image's image origin is the bottom-left corner of the image, while OpenCV's origin is the top-left corner. To convert Core Image's homography matrix to OpenCV coordinates, one needs to apply the following transform:

H_opencv = Q * H_core_image * Q

where Q = [1 0 0; 0 -1 image.height; 0 0 1]
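As a sketch of this conversion in Python with numpy (the function name is my own):

```python
import numpy as np

def core_image_to_opencv_homography(h_ci, image_height):
    """Convert a homography from Core Image coordinates (origin bottom-left,
    y pointing up) to OpenCV coordinates (origin top-left, y pointing down)
    via H_opencv = Q @ H_ci @ Q."""
    q = np.array([[1.0,  0.0, 0.0],
                  [0.0, -1.0, image_height],
                  [0.0,  0.0, 1.0]])
    return q @ h_ci @ q

# Example: a pure translation of (5, 10) in Core Image coordinates
# becomes (5, -10) in OpenCV coordinates, since the y axis is flipped.
h_ci = np.array([[1.0, 0.0, 5.0],
                 [0.0, 1.0, 10.0],
                 [0.0, 0.0, 1.0]])
h_cv = core_image_to_opencv_homography(h_ci, image_height=480.0)
```

Note that Q is its own inverse (Q·Q = I), which is why the same matrix appears on both sides of H.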

Update for the comments:


I defined Q as a row-major matrix.


Apple's simd matrices are column-major, while OpenCV's matrices are row-major. To get the above equation to work, you may have to use the transpose of Q.
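The storage-order pitfall can be illustrated with plain numpy (the flat value layout below is what I'd expect simd_float3x3 to hold for a translation-only warpTransform; the variable names are my own):

```python
import numpy as np

# simd_float3x3 stores three COLUMNS in order (column-major). A
# translation-only warpTransform of (tx, ty) = (5, 10) would be laid
# out flat like this:
simd_columns = [1.0, 0.0, 0.0,    # column 0
                0.0, 1.0, 0.0,    # column 1
                5.0, 10.0, 1.0]   # column 2

# Reading those values row-major and transposing recovers the
# row-major matrix OpenCV expects, with the translation in the
# last column.
h_row_major = np.array(simd_columns).reshape(3, 3).T
```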
