Detecting object regions in image opencv

Question

We're currently trying to detect the object regions in medical instrument images using the methods available in OpenCV, C++ version. An example image is shown below:

Here are the steps we're following:

  • Convert the image to grayscale
  • Apply a median filter
  • Find edges using a Sobel filter
  • Convert the result to a binary image using a threshold of 25
  • Skeletonize the image to make sure the edges are neat
  • Find the X largest connected components (a rough sketch of this pipeline follows the list)
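
A minimal sketch of what this pipeline could look like with the OpenCV C++ API (the file name and the median kernel size are placeholders, and the skeletonization and component-selection steps are only indicated):

// Rough sketch of the pipeline above; "instrument.jpg" and the median
// kernel size are assumptions, and steps 5-6 are only outlined.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat im = imread("instrument.jpg");
    Mat gray, med, gx, gy, agx, agy, edges, bw;

    cvtColor(im, gray, COLOR_BGR2GRAY);              // 1. grayscale
    medianBlur(gray, med, 5);                        // 2. median filter
    Sobel(med, gx, CV_16S, 1, 0);                    // 3. Sobel edges (x and y)
    Sobel(med, gy, CV_16S, 0, 1);
    convertScaleAbs(gx, agx);
    convertScaleAbs(gy, agy);
    addWeighted(agx, 0.5, agy, 0.5, 0, edges);
    threshold(edges, bw, 25, 255, THRESH_BINARY);    // 4. binarize with threshold 25

    // 5. skeletonize bw (no built-in OpenCV function; e.g. iterative thinning)
    // 6. keep the X largest connected components, e.g. findContours()
    //    followed by sorting the contours by contourArea()
    return 0;
}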

This approach works perfectly for image 1, and here is the result:

  • The yellow borders are the detected connected components.
  • The rectangles are just there to highlight the connected components.
  • To keep the result understandable, we simply removed the connected components that are completely inside another one, so the final result is this (a sketch of this filtering step is shown right below):
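
As an illustration, that containment check could be done on the components' bounding rectangles roughly as follows (a sketch only; removeNestedBoxes is a name we introduce here, not something from the original code):

// Drop bounding boxes that lie completely inside another box (a sketch;
// note that identical boxes would both be dropped by this simple check).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> removeNestedBoxes(const std::vector<cv::Rect>& boxes)
{
    std::vector<cv::Rect> kept;
    for (size_t i = 0; i < boxes.size(); i++)
    {
        bool nested = false;
        for (size_t j = 0; j < boxes.size(); j++)
        {
            // boxes[i] is inside boxes[j] if their intersection equals boxes[i]
            if (i != j && (boxes[i] & boxes[j]) == boxes[i])
            {
                nested = true;
                break;
            }
        }
        if (!nested)
            kept.push_back(boxes[i]);
    }
    return kept;
}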

So far everything was fine, but another image sample, shown below, complicated our work.

Having a small light green towel under the objects results in this image:

After filtering the regions as we did earlier, we got this:

Obviously, this is not what we need. We're expecting something like this:

I'm thinking about clustering the closest connected components found (somehow!) so we can minimize the impact of the presence of the towel, but I don't know yet if it's doable or whether someone has tried something like this before. Also, does anyone have a better idea to overcome this kind of problem?
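
For reference, here is a rough sketch of what that clustering could look like: group bounding rectangles whose gap is below some distance using cv::partition, then merge each group into one box. The 50-pixel gap threshold and the function names are assumptions, not part of the original approach:

// Group nearby bounding boxes and merge each group into one box.
// maxGap is an arbitrary, assumed distance threshold.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

static bool closeEnough(const Rect& a, const Rect& b)
{
    const int maxGap = 50;   // assumed maximum gap between boxes, in pixels
    Rect grown(a.x - maxGap / 2, a.y - maxGap / 2, a.width + maxGap, a.height + maxGap);
    return (grown & b).area() > 0;   // boxes overlap once a is grown by the gap
}

std::vector<Rect> clusterBoxes(const std::vector<Rect>& boxes)
{
    std::vector<int> labels;
    int nGroups = partition(boxes, labels, closeEnough);

    std::vector<Rect> merged(nGroups);
    std::vector<bool> started(nGroups, false);
    for (size_t i = 0; i < boxes.size(); i++)
    {
        int g = labels[i];
        merged[g] = started[g] ? (merged[g] | boxes[i]) : boxes[i];
        started[g] = true;
    }
    return merged;
}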

Thanks.

Answer

Here's what I tried.

In the images, the background is mostly greenish and the area of the background is considerably larger than that of the foreground. So, if you take a color histogram of the image, the greenish bins will have higher values. Threshold this histogram so that bins having smaller values are set to zero. This way we'll most probably retain the greenish (higher value) bins and discard other colors. Then backproject this histogram. The backprojection will highlight these greenish regions in the image.

Backprojection:

  • Then threshold this backprojection. This gives us the background.

Background (after some morphological filtering):

  • Invert the background to get the foreground.

Foreground (after some morphological filtering):

  • Then find the contours of the foreground.

I think this gives a reasonable segmentation, and using this as a mask you may be able to use a segmentation like GrabCut to refine the boundaries (I haven't tried this yet).

I tried the GrabCut approach and it indeed refines the boundaries. I've added the code for GrabCut segmentation.

Contours:

GrabCut segmentation using the foreground as a mask:

I'm using the OpenCV C API for the histogram processing part.

// load the color image
IplImage* im = cvLoadImage("bFly6.jpg");

// get the color histogram
IplImage* im32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 3);
cvConvertScale(im, im32f);

int channels[] = {0, 1, 2};
int histSize[] = {32, 32, 32};
float rgbRange[] = {0, 256};
float* ranges[] = {rgbRange, rgbRange, rgbRange};

CvHistogram* hist = cvCreateHist(3, histSize, CV_HIST_ARRAY, ranges);
IplImage* b = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* g = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* r = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* backproject32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 1);
IplImage* backproject8u = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplImage* bw = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplConvKernel* kernel = cvCreateStructuringElementEx(3, 3, 1, 1, MORPH_ELLIPSE);

cvSplit(im32f, b, g, r, NULL);
IplImage* planes[] = {b, g, r};
cvCalcHist(planes, hist);

// find min and max values of histogram bins
float minval, maxval;
cvGetMinMaxHistValue(hist, &minval, &maxval);

// threshold the histogram. this sets the bin values that are below the threshold to zero
cvThreshHist(hist, maxval/32);

// backproject the thresholded histogram. backprojection should contain higher values for the
// background and lower values for the foreground
cvCalcBackProject(planes, backproject32f, hist);

// convert to 8u type
double min, max;
cvMinMaxLoc(backproject32f, &min, &max);
cvConvertScale(backproject32f, backproject8u, 255.0 / max);

// threshold backprojected image. this gives us the background
cvThreshold(backproject8u, bw, 10, 255, CV_THRESH_BINARY);

// some morphology on background
cvDilate(bw, bw, kernel, 1);
cvMorphologyEx(bw, bw, NULL, kernel, MORPH_CLOSE, 2);

// get the foreground
cvSubRS(bw, cvScalar(255, 255, 255), bw);
cvMorphologyEx(bw, bw, NULL, kernel, MORPH_OPEN, 2);
cvErode(bw, bw, kernel, 1);

// find contours of the foreground
//CvMemStorage* storage = cvCreateMemStorage(0);
//CvSeq* contours = 0;
//cvFindContours(bw, storage, &contours);
//cvDrawContours(im, contours, CV_RGB(255, 0, 0), CV_RGB(0, 0, 255), 1, 2);

// grabcut
Mat color(im);
Mat fg(bw);
Mat mask(bw->height, bw->width, CV_8U);

mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, fg);

Mat bgdModel, fgdModel;
grabCut(color, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);

Mat gcfg = mask == GC_PR_FGD;

vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
findContours(gcfg, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
for(int idx = 0; idx < contours.size(); idx++)
{
    drawContours(color, contours, idx, Scalar(0, 0, 255), 2);
}

// cleanup ...

UPDATE: We can do the above using the C++ interface as shown below.

const int channels[] = {0, 1, 2};
const int histSize[] = {32, 32, 32};
const float rgbRange[] = {0, 256};
const float* ranges[] = {rgbRange, rgbRange, rgbRange};

Mat hist;
Mat im32fc3, backpr32f, backpr8u, backprBw, kernel;

Mat im = imread("bFly6.jpg");

im.convertTo(im32fc3, CV_32FC3);
calcHist(&im32fc3, 1, channels, Mat(), hist, 3, histSize, ranges, true, false);
calcBackProject(&im32fc3, 1, channels, hist, backpr32f, ranges);

double minval, maxval;
minMaxIdx(backpr32f, &minval, &maxval);
threshold(backpr32f, backpr32f, maxval/32, 255, THRESH_TOZERO);
backpr32f.convertTo(backpr8u, CV_8U, 255.0/maxval);
threshold(backpr8u, backprBw, 10, 255, THRESH_BINARY);

kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));

dilate(backprBw, backprBw, kernel);
morphologyEx(backprBw, backprBw, MORPH_CLOSE, kernel, Point(-1, -1), 2);

backprBw = 255 - backprBw;

morphologyEx(backprBw, backprBw, MORPH_OPEN, kernel, Point(-1, -1), 2);
erode(backprBw, backprBw, kernel);

Mat mask(backpr8u.rows, backpr8u.cols, CV_8U);

mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, backprBw);

Mat bgdModel, fgdModel;
grabCut(im, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);

Mat fg = mask == GC_PR_FGD;
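
A possible continuation (not part of the original update) that mirrors the contour-drawing step of the C-API version, using the foreground mask fg computed above:

// Draw the contours of the GrabCut foreground on the input image
// (mirrors the C-API version; fg is cloned because findContours
// modifies its input in older OpenCV versions).
vector<vector<cv::Point> > contours;
vector<Vec4i> hierarchy;
findContours(fg.clone(), contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
for (size_t idx = 0; idx < contours.size(); idx++)
{
    drawContours(im, contours, (int)idx, Scalar(0, 0, 255), 2);
}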
