emgucv: pan card improper skew detection in C#


Question




I have three images of a PAN card for testing image skew using EmguCV and C#.

The first image (top) is detected as 180 degrees, which works properly.

The second image (middle) is detected as 90 degrees but should be detected as 180 degrees.

The third image is detected as 180 degrees but should be detected as 90 degrees.

One observation I would like to share: when I crop the unwanted parts of the image above and below the PAN card using a paint brush, the code below gives me the expected result.

Now I would like to understand how I can remove the unwanted parts programmatically. I have played with contours and ROI, but I cannot figure out how to apply them here. I do not understand whether EmguCV selects the contour itself or whether I have to do something.

Please suggest any suitable code example.

Please check the code below for angle detection and please help me. Thanks in advance.

imgInput = new Image<Bgr, byte>(impath);
          Image<Gray, Byte> img2 = imgInput.Convert<Gray, Byte>();
          Bitmap imgs;
          Image<Gray, byte> imgout = imgInput.Convert<Gray, byte>().Not().ThresholdBinary(new Gray(50), new Gray(125));
          VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
          Emgu.CV.Mat hier = new Emgu.CV.Mat();
          var blurredImage = imgInput.SmoothGaussian(5, 5, 0 , 0);
          CvInvoke.AdaptiveThreshold(imgout, imgout, 255, Emgu.CV.CvEnum.AdaptiveThresholdType.GaussianC, Emgu.CV.CvEnum.ThresholdType.Binary, 5, 45);

          CvInvoke.FindContours(imgout, contours, hier, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
          if (contours.Size >= 1)
          {
              for (int i = 0; i < contours.Size; i++)
              {

                  Rectangle rect = CvInvoke.BoundingRectangle(contours[i]);
                  RotatedRect box = CvInvoke.MinAreaRect(contours[i]);
                  PointF[] Vertices = box.GetVertices();
                  PointF point = box.Center;
                  PointF edge1 = new PointF(Vertices[1].X - Vertices[0].X, Vertices[1].Y - Vertices[0].Y);
                  PointF edge2 = new PointF(Vertices[2].X - Vertices[1].X, Vertices[2].Y - Vertices[1].Y);
                  double r = edge1.X + edge1.Y;
                  double edge1Magnitude = Math.Sqrt(Math.Pow(edge1.X, 2) + Math.Pow(edge1.Y, 2));
                  double edge2Magnitude = Math.Sqrt(Math.Pow(edge2.X, 2) + Math.Pow(edge2.Y, 2));
                  PointF primaryEdge = edge1Magnitude > edge2Magnitude ? edge1 : edge2;
                  double primaryMagnitude = edge1Magnitude > edge2Magnitude ? edge1Magnitude : edge2Magnitude;
                  PointF reference = new PointF(1, 0);
                  double refMagnitude = 1;
                  double thetaRads = Math.Acos(((primaryEdge.X * reference.X) + (primaryEdge.Y * reference.Y)) / (primaryMagnitude * refMagnitude));
                  double thetaDeg = thetaRads * 180 / Math.PI;
                  imgInput = imgInput.Rotate(thetaDeg, new Bgr());
                  imgout = imgout.Rotate(box.Angle, new Gray());
                  Bitmap bmp = imgout.Bitmap;
                  break;
              }

          }
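The angle step in the loop above is the standard dot-product identity θ = arccos(e·r / (|e||r|)) between the rectangle's longer edge and the reference vector (1, 0). A minimal Python sketch of just that geometry (the vertex list is a hypothetical stand-in for what `box.GetVertices()` returns):

```python
import math

def skew_angle_deg(vertices):
    """Angle of the longer rectangle edge against the x-axis, in degrees.

    vertices: four corner points of a rotated rectangle, in order,
    as a RotatedRect.GetVertices() call would return them.
    """
    # Two adjacent edges of the rectangle.
    edge1 = (vertices[1][0] - vertices[0][0], vertices[1][1] - vertices[0][1])
    edge2 = (vertices[2][0] - vertices[1][0], vertices[2][1] - vertices[1][1])
    # Pick the longer edge as the card's primary axis.
    mag1 = math.hypot(*edge1)
    mag2 = math.hypot(*edge2)
    primary, mag = (edge1, mag1) if mag1 > mag2 else (edge2, mag2)
    # theta = acos(e . r / |e||r|) with reference r = (1, 0), |r| = 1.
    theta = math.acos(primary[0] / mag)
    return math.degrees(theta)

# Axis-aligned 4x2 rectangle: the longer edge lies along x, so the angle is 0.
print(skew_angle_deg([(0, 0), (4, 0), (4, 2), (0, 2)]))  # 0.0
```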

Answer

The Problem

Let us start with the problem before the solution:

Your Code

When you submit code asking for help, at least make some effort to "clean" it. Help people help you! There are so many lines of code here that do nothing. You declare variables that are never used. Add some comments to let people know what you think your code should do.

Bitmap imgs;
var blurredImage = imgInput.SmoothGaussian(5, 5, 0, 0);
Rectangle rect = CvInvoke.BoundingRectangle(contours[i]);
PointF point = box.Center;
double r = edge1.X + edge1.Y;
// Etc

Adaptive Thresholding

The following line of code produces the following images:

 CvInvoke.AdaptiveThreshold(imgout, imgout, 255, Emgu.CV.CvEnum.AdaptiveThresholdType.GaussianC, Emgu.CV.CvEnum.ThresholdType.Binary, 5, 45);

Image 1

Image 2

Image 3

Clearly this is not what you're aiming for since the primary contour, the card edge, is completely lost. As a tip, you can always use the following code to display images at runtime to help you with debugging.

CvInvoke.NamedWindow("Output");
CvInvoke.Imshow("Output", imgout);
CvInvoke.WaitKey();

The Solution

Since, in your example images, the card is primarily a similar Value (in the HSV sense) to the background, I do not think simple grayscale thresholding is the correct approach in this case. I propose the following:

Algorithm

  1. Use Canny Edge Detection to extract the edges in the image.

  2. Dilate the edges so as the card content combines.

  3. Use Contour Detection to filter for the combined edges with the largest bounding.

  4. Fit this primary contour with a rotated rectangle in order to extract the corner points.

  5. Use the corner points to define a transformation matrix to be applied using WarpAffine.

  6. Warp and crop the image.
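Step 2 matters because Canny produces thin, fragmented edges; dilating them merges the card's border and content into one blob so that step 3 finds a single external contour. A pure-Python sketch of binary dilation with a 3x3 rectangular element on a toy grid (the real code does the equivalent via CvInvoke.Dilate with 5 iterations):

```python
def dilate(grid, iterations=1):
    """Binary dilation with a 3x3 rectangular structuring element."""
    h, w = len(grid), len(grid[0])
    for _ in range(iterations):
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # A pixel turns on if any neighbour in its 3x3 window is on.
                out[y][x] = int(any(
                    grid[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))))
        grid = out
    return grid

# Two separate edge fragments one pixel apart ...
edges = [[0, 0, 0, 0, 0],
         [0, 1, 0, 1, 0],
         [0, 0, 0, 0, 0]]
# ... merge into a single connected blob after one dilation.
print(dilate(edges))
```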

The Code

You may wish to experiment with the parameters of the Canny Detection and Dilation.

// Working Images
Image<Bgr, byte> imgInput = new Image<Bgr, byte>("Test1.jpg");
Image<Gray, byte> imgEdges = new Image<Gray, byte>(imgInput.Size);
Image<Gray, byte> imgDilatedEdges = new Image<Gray, byte>(imgInput.Size);
Image<Bgr, byte> imgOutput;

// 1. Edge Detection
CvInvoke.Canny(imgInput, imgEdges, 25, 80);

// 2. Dilation
CvInvoke.Dilate(
    imgEdges,
    imgDilatedEdges,
    CvInvoke.GetStructuringElement(
        ElementShape.Rectangle,
        new Size(3, 3),
        new Point(-1, -1)),
    new Point(-1, -1),
    5,
    BorderType.Default,
    new MCvScalar(0));

// 3. Contours Detection
VectorOfVectorOfPoint inputContours = new VectorOfVectorOfPoint();
Mat hierarchy = new Mat();
CvInvoke.FindContours(
    imgDilatedEdges,
    inputContours,
    hierarchy,
    RetrType.External,
    ChainApproxMethod.ChainApproxSimple);
VectorOfPoint primaryContour = (from contour in inputContours.ToList()
                                orderby contour.GetArea() descending
                                select contour).FirstOrDefault();

// 4. Corner Point Extraction
RotatedRect bounding = CvInvoke.MinAreaRect(primaryContour);
PointF topLeft = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(point.X, 2) + Math.Pow(point.Y, 2))
                  select point).FirstOrDefault();
PointF topRight = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(imgInput.Width - point.X, 2) + Math.Pow(point.Y, 2))
                  select point).FirstOrDefault();
PointF botLeft = (from point in bounding.GetVertices()
                  orderby Math.Sqrt(Math.Pow(point.X, 2) + Math.Pow(imgInput.Height - point.Y, 2))
                  select point).FirstOrDefault();
PointF botRight = (from point in bounding.GetVertices()
                   orderby Math.Sqrt(Math.Pow(imgInput.Width - point.X, 2) + Math.Pow(imgInput.Height - point.Y, 2))
                   select point).FirstOrDefault();
double boundingWidth = Math.Sqrt(Math.Pow(topRight.X - topLeft.X, 2) + Math.Pow(topRight.Y - topLeft.Y, 2));
double boundingHeight = Math.Sqrt(Math.Pow(botLeft.X - topLeft.X, 2) + Math.Pow(botLeft.Y - topLeft.Y, 2));
bool isLandscape = boundingWidth > boundingHeight;

// 5. Define warp criteria as triangles
PointF[] srcTriangle = new PointF[3];
PointF[] dstTriangle = new PointF[3];
Rectangle ROI;
if (isLandscape)
{
    srcTriangle[0] = botLeft;
    srcTriangle[1] = topLeft;
    srcTriangle[2] = topRight;
    dstTriangle[0] = new PointF(0, (float)boundingHeight);
    dstTriangle[1] = new PointF(0, 0);
    dstTriangle[2] = new PointF((float)boundingWidth, 0);
    ROI = new Rectangle(0, 0, (int)boundingWidth, (int)boundingHeight);
}
else
{
    srcTriangle[0] = topLeft;
    srcTriangle[1] = topRight;
    srcTriangle[2] = botRight;
    dstTriangle[0] = new PointF(0, (float)boundingWidth);
    dstTriangle[1] = new PointF(0, 0);
    dstTriangle[2] = new PointF((float)boundingHeight, 0);
    ROI = new Rectangle(0, 0, (int)boundingHeight, (int)boundingWidth);
}
Mat warpMat = CvInvoke.GetAffineTransform(srcTriangle, dstTriangle);

// 6. Apply the warp and crop
CvInvoke.WarpAffine(imgInput, imgInput, warpMat, imgInput.Size);
imgOutput = imgInput.Copy(ROI);
imgOutput.Save("Output1.bmp");
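The GetAffineTransform call in step 5 solves for the 2x3 matrix [[a, b, tx], [c, d, ty]] that maps the three source corners exactly onto the three destination corners. The same matrix can be recovered in pure Python with Cramer's rule; this is only the underlying math, not EmguCV's implementation:

```python
def affine_from_triangles(src, dst):
    """2x3 affine matrix mapping three src points onto three dst points.

    For each output coordinate, solves a*x + b*y + t = target for the
    three point pairs via Cramer's rule on the 3x3 system.
    """
    (x0, y0), (x1, y1), (x2, y2) = src

    def solve_row(targets):
        # Coefficient matrix rows are (x, y, 1) per source point.
        det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)
        t0, t1, t2 = targets
        a = (t0 * (y1 - y2) - y0 * (t1 - t2) + (t1 * y2 - t2 * y1)) / det
        b = (x0 * (t1 - t2) - t0 * (x1 - x2) + (x1 * t2 - x2 * t1)) / det
        t = (x0 * (y1 * t2 - y2 * t1) - y0 * (x1 * t2 - x2 * t1)
             + t0 * (x1 * y2 - x2 * y1)) / det
        return [a, b, t]

    return [solve_row([p[0] for p in dst]), solve_row([p[1] for p in dst])]

# Identity mapping shifted by (10, 5): the matrix is pure translation.
m = affine_from_triangles([(0, 0), (1, 0), (0, 1)],
                          [(10, 5), (11, 5), (10, 6)])
print(m)  # [[1.0, 0.0, 10.0], [0.0, 1.0, 5.0]]
```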

Two extension methods are used:

static List<VectorOfPoint> ToList(this VectorOfVectorOfPoint vectorOfVectorOfPoint)
{
    List<VectorOfPoint> result = new List<VectorOfPoint>();
    for (int contour = 0; contour < vectorOfVectorOfPoint.Size; contour++)
    {
        result.Add(vectorOfVectorOfPoint[contour]);
    }
    return result;
}

static double GetArea(this VectorOfPoint contour)
{
    RotatedRect bounding = CvInvoke.MinAreaRect(contour);
    return bounding.Size.Width * bounding.Size.Height;
}
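The corner-extraction step above labels each MinAreaRect vertex by its distance to the nearest image corner, then compares the top and left edge lengths to decide orientation. The same logic, stripped of LINQ, as a pure-Python sketch (the image size and vertices below are made up for illustration):

```python
import math

def classify_corners(vertices, width, height):
    """Label rotated-rect vertices as topLeft/topRight/botLeft/botRight
    by nearest distance to the corresponding image corner, mirroring
    the four orderby queries in the answer."""
    def nearest(cx, cy):
        return min(vertices, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    return {
        "topLeft": nearest(0, 0),
        "topRight": nearest(width, 0),
        "botLeft": nearest(0, height),
        "botRight": nearest(width, height),
    }

def is_landscape(corners):
    """True when the top edge is longer than the left edge."""
    tl, tr, bl = corners["topLeft"], corners["topRight"], corners["botLeft"]
    top = math.hypot(tr[0] - tl[0], tr[1] - tl[1])
    left = math.hypot(bl[0] - tl[0], bl[1] - tl[1])
    return top > left

# A slightly tilted landscape card inside a hypothetical 100x60 image.
verts = [(12, 8), (88, 12), (86, 50), (10, 46)]
c = classify_corners(verts, 100, 60)
print(c["topLeft"], is_landscape(c))  # (12, 8) True
```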

Outputs

Meta Example
