iOS get CGPoint from openCV cv::Point


Question

[image: points drawn on the image by an OpenCV square-detection algorithm]



In the above image, we can see points that were drawn on the image by some OpenCV algorithm.

I want to draw a UIView point on each of those points, so that the user can crop the image.

I am not getting how to access those points so that I can add the UIView points.

I tried to read the cv::Point values, but they simply don't match the view's coordinate height and width (they are larger).

static cv::Mat drawSquares( cv::Mat& image, const std::vector<std::vector<cv::Point> >& squares )
{
    int max_X = 0, max_Y = 0;     // note: currently unused
    int min_X = 999, min_Y = 999; // note: currently unused
    for( size_t i = 0; i < squares.size(); i++ )
    {
        // First vertex and vertex count of the i-th square.
        const cv::Point* p = &squares[i][0];
        int n = (int)squares[i].size();

        NSLog(@"Squares %d %d %d", n, p->x, p->y);

        // Draw the square outline in green.
        polylines(image, &p, &n, 1, true, cv::Scalar(0,255,0), 3, cv::LINE_AA);
    }

    return image;
}

In the above code, the drawSquares method draws the squares. I have NSLog'd the points' x, y coordinates, but these values are not w.r.t. the device coordinate system.
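As an aside, the min_X/max_X/min_Y/max_Y variables declared in drawSquares are never used; if the goal is a single crop region, one option is to fold every square vertex into a min/max bounding box in image space first, and only then convert that one rectangle to view coordinates. A minimal sketch, without OpenCV (the `Pt` struct stands in for cv::Point, and `boundingBox` is an illustrative helper, not part of any library):

```cpp
#include <vector>
#include <algorithm>
#include <climits>

// Stand-in for cv::Point so the sketch compiles without OpenCV.
struct Pt { int x, y; };

// Axis-aligned bounding box in image coordinates.
struct Box { int minX, minY, maxX, maxY; };

// Fold every vertex of every detected square into one min/max box.
Box boundingBox(const std::vector<std::vector<Pt>>& squares) {
    Box b{INT_MAX, INT_MAX, INT_MIN, INT_MIN};
    for (const auto& square : squares)
        for (const auto& p : square) {
            b.minX = std::min(b.minX, p.x);
            b.minY = std::min(b.minY, p.y);
            b.maxX = std::max(b.maxX, p.x);
            b.maxY = std::max(b.maxY, p.y);
        }
    return b;
}
```

The resulting box is still in image pixels; it has to go through the same scale conversion as any individual point before it can position a UIView.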

Can someone help me with how this can be achieved, or suggest an alternative for my requirement?

Thanks

Solution

Actually, due to the image size, the coordinates are mapped in a different way.

For example, if the image size is within the bounds of the screen, then there is no issue; you can directly use the cvPoint as a CGPoint.

But if the image size is 3000*2464, which is roughly the size of a camera-captured image, then you have to apply a formula.

Below is the approach I found on the internet; it helped me extract a CGPoint from a cvPoint when the image is larger than the screen dimensions.

Get the scale factor of the image:

- (CGFloat)contentScale
{
    CGSize imageSize = self.image.size;
    CGFloat imageScale = fminf(CGRectGetWidth(self.bounds) / imageSize.width, CGRectGetHeight(self.bounds) / imageSize.height);
    return imageScale;
}

Suppose this is the cvPoint (the _pointA variable) you have; then by using the formula below you can extract it (scaleFactor is the value returned by contentScale):

tmp = CGPointMake((_pointA.frame.origin.x) / scaleFactor, (_pointA.frame.origin.y) / scaleFactor);
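The arithmetic behind this can be sketched in plain C++ (the `CvPoint2i`/`CGPointF` structs and the helper names below are illustrative stand-ins, not the real OpenCV or Core Graphics types): the aspect-fit scale is min(viewWidth/imageWidth, viewHeight/imageHeight), an image-space point is multiplied by that scale to land in view space, and a view-space point is divided by it to get back to image space, which is the direction the CGPointMake formula above uses.

```cpp
#include <algorithm>

// Minimal stand-ins so the sketch is self-contained
// (illustrative names, not the real cv::Point / CGPoint).
struct CvPoint2i { int x, y; };
struct CGPointF  { float x, y; };

// Aspect-fit scale, mirroring the contentScale method above:
// min(viewW / imageW, viewH / imageH).
float contentScale(float imageW, float imageH, float viewW, float viewH) {
    return std::min(viewW / imageW, viewH / imageH);
}

// Image space -> view space: multiply by the scale.
CGPointF imageToView(CvPoint2i p, float scale) {
    return CGPointF{ p.x * scale, p.y * scale };
}

// View space -> image space: divide by the scale
// (the direction used by the CGPointMake formula in the answer).
CGPointF viewToImage(CGPointF p, float scale) {
    return CGPointF{ p.x / scale, p.y / scale };
}
```

For a 3000*2464 image shown in a 375*667 view the scale is 375/3000 = 0.125, so an OpenCV point at (1600, 800) lands at (200, 100) in the view, and a view point divided by the same factor goes back to image pixels.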
