Laplacian Image Filtering and Sharpening Images in MATLAB

Problem Description



    I am trying to "translate" what's mentioned in Gonzalez and Woods (2nd Edition) about the Laplacian filter.

    I've read in the image and created the filter. However, when I try to display the result (by subtraction, since the center element is negative), I don't get the image as in the textbook.

    I think the main reason is the "scaling". However, I'm not sure how exactly to do that. From what I understand, some online resources say that the scaling is just so that the values are between 0-255. From my code, I see that the values are already within that range.

    I would really appreciate any pointers.

    Below is the original image I used:

    Below is my code, and the resultant sharpened image.

    Thanks!

    clc;
    close all;
    a = rgb2gray(imread('e:\moon.png'));
    lap = [1 1 1; 1 -8 1; 1 1 1];
    resp = uint8(filter2(lap, a, 'same'));
    sharpened = imsubtract(a, resp);
    figure; 
    subplot(1,3,1);imshow(a); title('Original image');
    subplot(1,3,2);imshow(resp); title('Laplacian filtered image');
    subplot(1,3,3);imshow(sharpened); title('Sharpened image');
    

    Solution

    I have a few tips for you:

    1. This is just a little thing, but filter2 performs correlation. You actually need to perform convolution, which rotates the kernel by 180 degrees before performing the weighted sum between neighbourhoods of pixels and the kernel. However, because the kernel is symmetric, convolution and correlation produce the same result in this case.
    2. I would recommend you use imfilter to facilitate the filtering as you are using methods from the Image Processing Toolbox already. It's faster than filter2 or conv2 and takes advantage of the Intel Integrated Performance Primitives.
    3. I highly recommend you do everything in double precision first, then convert back to uint8 when you're done. Use im2double to convert your image (most likely uint8) to double precision. This maintains precision when sharpening; prematurely casting to uint8 and then performing the subtraction will give you unintended side effects. uint8 will cap results that are negative or beyond 255, and this may also be a reason why you're not getting the right results. Therefore, convert the image to double, filter it, sharpen the result by subtracting the filtered (Laplacian) response from the image, and then convert back to uint8 with im2uint8. A minimal sketch of this workflow follows the list.
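
    To make these tips concrete, here is a minimal sketch of that workflow. It is only an illustration, not the full fix described further below: the file name 'moon.png' is an assumption (substitute your own image, and apply rgb2gray first if it is RGB).

    %// Sketch only -- the file name is assumed
    a = im2double(imread('moon.png'));   %// work in double precision from the start
    lap = [1 1 1; 1 -8 1; 1 1 1];

    %// filter2 performs correlation; conv2 performs convolution (kernel
    %// flipped by 180 degrees). For this symmetric kernel both agree:
    respCorr = filter2(lap, a, 'same');
    respConv = conv2(a, lap, 'same');
    %// isequal(respCorr, respConv) is true here because rot90(lap, 2) equals lap

    %// imfilter can do either; 'conv' requests true convolution
    resp = imfilter(a, lap, 'conv');

    %// Subtract because the centre coefficient is negative, then convert back
    %// to uint8 only at the very end (im2uint8 maps [0,1] to [0,255] and
    %// clips values outside that range)
    sharpened = im2uint8(a - resp);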


    You've also provided a link to the pipeline that you're trying to imitate: http://www.idlcoyote.com/ip_tips/sharpen.html

    The differences between your code and the link are:

    1. The kernel has a positive centre. Therefore the 1s are negative while the centre is +8 and you'll have to add the filtered result to the original image.
    2. In the link, they normalize the filtered response so that the minimum is 0 and the maximum is 1.
    3. Once you add the filtered response onto the original image, you also normalize this result so that the minimum is 0 and the maximum is 1.
    4. You perform a linear contrast enhancement so that intensity 60 becomes the new minimum and intensity 200 becomes the new maximum. You can use imadjust to do this. The function takes in an image as well as two arrays: the first array is the input minimum and maximum intensity, and the second array is where the minimum and maximum should map to. As such, I'd like to map the input intensity 60 to the output intensity 0 and the input intensity 200 to the output intensity 255. Make sure the intensities specified are between 0 and 1, though, so you'll have to divide each quantity by 255 as stated in the documentation.

    As such:

    clc;
    close all;
    a = im2double(imread('moon.png')); %// Read in your image
    lap = [-1 -1 -1; -1 8 -1; -1 -1 -1]; %// Change - Centre is now positive
    resp = imfilter(a, lap, 'conv'); %// Change
    
    %// Change - Normalize the response image
    minR = min(resp(:));
    maxR = max(resp(:));
    resp = (resp - minR) / (maxR - minR);
    
    %// Change - Adding to original image now
    sharpened = a + resp;
    
    %// Change - Normalize the sharpened result
    minA = min(sharpened(:));
    maxA = max(sharpened(:));
    sharpened = (sharpened - minA) / (maxA - minA);
    
    %// Change - Perform linear contrast enhancement
    sharpened = imadjust(sharpened, [60/255 200/255], [0 1]);
    
    figure; 
    subplot(1,3,1);imshow(a); title('Original image');
    subplot(1,3,2);imshow(resp); title('Laplacian filtered image');
    subplot(1,3,3);imshow(sharpened); title('Sharpened image');
    

    I get this figure now... which seems to agree with the figures seen in the link:
