
Medical Imaging BMEN90021
Lab 1 Report, 2013
Warda (ID: 519973), Suzan Maleki (ID: 563744)

Section A:
A.1 Include your constructed initials image.

Initials Image, Section A

A.2 Include the Sobel images in the x- and y-directions. Describe their information content.

Sobel edges in x direction (colour bar from -4 to +4), Section A

Sobel edges in y direction (colour bar from -4 to +4), Section A

We can take advantage of the colour bar to describe the information content of the images properly. On this scale, black corresponds to a value of -4, white to +4, and grey to 0. As illustrated, we have convolved the Sobel filter with our image once in the X and once in the Y direction. The Sobel operator detected the edges of the image in the X and Y directions separately, as given in the figures above. The edges appear dark when going from dark to light, and they appear white when going from light to dark. The effect is best explained by considering the following two images I1 and I2.

I1 = [0 0 1; 0 0 1; 0 0 1] (dark to light)

I2 = [1 0 0; 1 0 0; 1 0 0] (light to dark)

When I1 is convolved with Sfx = [1 0 -1; 2 0 -2; 1 0 -1], the resulting gradient in the x direction at the centre pixel is -4. Hence, whenever the transition is from a darker to a lighter region, the gradient will be negative. Similarly, when I2 is convolved with Sfx, the resulting gradient in the x direction is 4. Therefore, any transition from light to dark in an image will give a positive gradient. This is evident from the images presented above.
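The sign behaviour described here can be checked with a small sketch (NumPy assumed; the two 3×3 example images are illustrative, with the filter applied as a correlation, as MATLAB's imfilter does by default):

```python
import numpy as np

# Sobel x filter, applied as a correlation at the centre pixel
Sfx = np.array([[1, 0, -1],
                [2, 0, -2],
                [1, 0, -1]])

I1 = np.array([[0, 0, 1],
               [0, 0, 1],
               [0, 0, 1]])  # dark-to-light transition
I2 = np.array([[1, 0, 0],
               [1, 0, 0],
               [1, 0, 0]])  # light-to-dark transition

g1 = np.sum(Sfx * I1)  # response at the centre pixel
g2 = np.sum(Sfx * I2)
print(g1, g2)  # -4 4
```

A dark-to-light step gives a negative response (displayed dark), a light-to-dark step a positive one (displayed white), matching the figures.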

A.3 Include the Sobel magnitude (SobelLetterEdges) image.

Sobel edges, Section A

Edges by the edge function, Section A

A.4 What mathematical functions do the Sobel filters approximate, and why/how? The Sobel filters approximate the first-order partial derivatives (the gradient) of the signal intensity in the x and y directions, using discrete difference operators. Since the edges of objects in the image correspond to significant changes in intensity values, the Sobel filters act as edge detectors in both the x and y directions. The filter matrices are given as

Sfx = [1 0 -1; 2 0 -2; 1 0 -1] and Sfy = [1 2 1; 0 0 0; -1 -2 -1]

These filters can be convolved with the image to obtain the edges of the objects. First we need to convert the image to a grayscale representation of its brightness, and then apply the edge function. The Sobel edge function detects and returns those points of an image where the gradient is maximal, i.e. at the edges. Edge detection is important in image processing because it locates the areas with sharp changes in brightness. The Sobel operator uses a pair of 3×3 matrices (as given above) and convolves them with the image to estimate the intensity changes in both the X and Y directions. The operator therefore computes the directional change of the image intensity at every single point, estimating the direction of the maximum increase from light to dark or vice versa. It needs only the 8 neighbouring pixels around each point to calculate the result.
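As a sketch of the procedure just described (pure NumPy, assumed available; a hypothetical vertical step-edge image is used as input):

```python
import numpy as np

# Standard Sobel masks (correlation form); Sfy is the transpose of Sfx
Sfx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
Sfy = Sfx.T

def sobel_magnitude(img):
    """Gradient magnitude over the valid (interior) pixels."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(Sfx * patch)  # gradient in x
            gy[i, j] = np.sum(Sfy * patch)  # gradient in y
    return np.sqrt(gx**2 + gy**2)

# A vertical step edge: magnitude is zero in flat regions, large at the edge
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
print(mag)
```

Flat regions give zero response; the two columns of windows straddling the step respond strongly, which is exactly why thresholding the magnitude yields an edge map.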

5. Extension: Investigate use of the inbuilt edge function applied to the initials image. What algorithm does edge implement, and why does it work so well? The MATLAB edge function implements the following code to find the edges of an image using a Sobel filter. It estimates the gradient magnitude with the Sobel masks, chooses a threshold automatically from an RMS estimate of the image noise, and then thins the result to one-pixel-wide edges, which is why it works so well.
if strcmp(method,'sobel')
    op = fspecial('sobel')/8;   % Sobel approximation to derivative
    x_mask = op';               % gradient in the X direction
    y_mask = op;
    scale = 4;                  % for calculating the automatic threshold
    offset = [0 0 0 0];         % offsets used in the computation of the threshold
    % compute the gradient in x and y direction
    bx = imfilter(a,x_mask,'replicate');
    by = imfilter(a,y_mask,'replicate');
    % compute the magnitude
    b = kx*bx.*bx + ky*by.*by;
    % determine the threshold; see page 514 of "Digital Image Processing" by
    % William K. Pratt
    if isempty(thresh)
        % Determine cutoff based on RMS estimate of noise
        % Mean of the magnitude squared image is a
        % value that's roughly proportional to SNR
        cutoff = scale*mean2(b);
        thresh = sqrt(cutoff);
    else
        % Use relative tolerance specified by the user
        cutoff = (thresh).^2;
    end
    if thinning
        e = computeEdgesWithThinning(b,bx,by,kx,ky,offset,cutoff);
    else
        e = b > cutoff;
    end
end

Section B:
B.1 Include the grayscale ImgPM image.
The grayscale image of the PM, Section B

B.2 Include the filtered FiltImgPM image.


The filtered image of the PM, with a Gaussian filter of size 5 and standard deviation 2, Section B

B.3 Include the two downsampled images at the same pixel size as the original image, i.e. the downsampled images should take the same space as the original image.

The downsampled image of the PM, Section B

Downsampled and filtered image of the PM, Section B

B.4 What is the result of downsampling with / without smoothing? Why is one better or worse than the other, and how does smoothing before downsampling help or hinder? Downsampling in signal processing means decreasing the sampling rate of a signal in order to reduce the data size. Since downsampling does not change the bandwidth of a signal but only reduces the sampling rate, it is recommended to apply a low-pass filter before downsampling to avoid aliasing. Aliasing occurs because any frequency above half the sampling frequency (fs/2) cannot be distinguished from a lower frequency, which results in overlapping of adjacent spectral components. Therefore, reconstruction of the image without smoothing or low-pass filtering produces aliasing artefacts that were not present in the original data. The overall process of applying a low-pass filter and then downsampling is known as decimation. Downsampling without smoothing results in more pronounced edges, and the image appears more pixelated (left image). Smoothing before downsampling results in less sharp edges, and the image appears smoother and clearer (right image). In our opinion, smoothing before downsampling is the better approach, as it suppresses aliasing and gives a smoother image.
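The difference between the two approaches can be seen on an extreme example (NumPy assumed; a one-pixel checkerboard is the highest-frequency pattern an image can hold, and a simple 2×2 box average stands in for the Gaussian smoothing step):

```python
import numpy as np

# 1-pixel checkerboard: alternating 0/1, i.e. content at the Nyquist limit
img = np.indices((8, 8)).sum(axis=0) % 2

# Downsampling by 2 WITHOUT smoothing: keep every second pixel.
# All kept pixels happen to be 0, so the pattern aliases to a constant.
naive = img[::2, ::2]

# Smoothing (2x2 box average) BEFORE downsampling: each output pixel is
# the local mean, so the true average brightness of 0.5 survives.
smoothed = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(naive)     # all zeros - the checkerboard has vanished entirely
print(smoothed)  # all 0.5 - the local average is preserved
```

The naive result is not just "pixelated" but wrong: the fine pattern has aliased into something absent from the original, which is exactly the artefact the low-pass filter prevents.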

X(f) = ∫ x(t) e^(-j2πft) dt

X(f) = 0 for all |f| > B

where B is the limited bandwidth of the signal, which must satisfy B < fs/2, with fs the sampling frequency.
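The condition B < fs/2 can be illustrated in one dimension (NumPy assumed; the frequencies are illustrative): a sinusoid above fs/2 is indistinguishable, once sampled, from one below it.

```python
import numpy as np

fs = 8.0                 # sampling frequency: Nyquist limit is fs/2 = 4 Hz
t = np.arange(16) / fs   # 16 sample instants

high = np.cos(2 * np.pi * 6 * t)  # 6 Hz, above fs/2
low = np.cos(2 * np.pi * 2 * t)   # 2 Hz alias, since 6 = fs - 2

# The two sampled sequences are identical: the 6 Hz tone has aliased to 2 Hz
print(np.allclose(high, low))  # True
```

This is the one-dimensional version of the overlap described above; in an image the same folding shows up as spurious low-frequency patterns.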

B.5 How can downsampling be done in the frequency domain? In the frequency domain, downsampling by a factor n corresponds to retaining only the central 1/n portion of the (zero-frequency-centred) spectrum and taking the inverse Fourier transform; the low-pass filtering step of decimation then amounts to simply discarding the high-frequency part of the spectrum. In the spatial domain, MATLAB provides the command S = downsample(w, n), which reduces the sampling rate of w by keeping every n-th sample starting with the first, where the data set w can be either a vector or a matrix. A phase offset can also be supplied to select which downsampled sequence is kept; this phase ranges from 0 to n-1: S = downsample(w, n, phase). It should be noted that the downsampling factor must be an integer or rational fraction greater than one. This factor divides the sampling rate or, equivalently, multiplies the sampling interval by n.
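A minimal sketch of the frequency-domain route (NumPy assumed; the 8×8 random image and factor of 2 are illustrative): crop the centred spectrum, then inverse-FFT.

```python
import numpy as np

x = np.random.rand(8, 8)
n = 2  # downsampling factor

# Centre the zero-frequency component, as with MATLAB's fftshift
X = np.fft.fftshift(np.fft.fft2(x))

# Keep only the central 1/n of the spectrum in each dimension
h, w = X.shape
ch, cw = h // (2 * n), w // (2 * n)
Xc = X[h//2 - ch : h//2 + ch, w//2 - cw : w//2 + cw]

# Inverse transform; dividing by n^2 keeps intensities comparable
y = np.real(np.fft.ifft2(np.fft.ifftshift(Xc))) / n**2
print(y.shape)  # (4, 4)
```

Discarding the outer spectrum is simultaneously the low-pass filter and the size reduction, so this single step performs the decimation described in B.4.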


Section C:
C.1 Include all images displayed as per the instructions above.
The grayscale noisy brain image, Section C

Filtered brain image with a Gaussian filter of size 20 and standard deviation 4, Section C

Unsharp masked image with weighting factor 2, Section C

Unsharp masked image with weighting factor 5, Section C

C.2 What is the computed SNR for each image?

The computed SNR for each image is as follows:

Image                          SNR value
a. Original                    13.1797
b. Gaussian smoothed           36.7165
c. Unsharp masked, factor 2     4.3057
d. Unsharp masked, factor 5     2.1249

C.3 How do the SNR values compare between images? Give reasons for differences in SNR values. A comparison between the images shows that the Gaussian-smoothed image has the highest SNR, because the high-frequency noise has been removed from the original image. The unsharp-masked images have lower SNR values than the original image.

The difference between the original image and the filtered image gives the actual noise present in the image, which is then weighted by a factor greater than 1. This amplified noise is then added back to the original image, which decreases the SNR. The lowest SNR occurs for the factor of 5, because the noise is weighted most heavily in that case. Therefore the original image has a higher SNR than both unsharp-masked images.
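The mechanism can be sketched in pure NumPy (assumed available). The SNR definition here, mean over standard deviation, is an assumption for illustration; the lab's exact definition, filter size and sigma may differ.

```python
import numpy as np

def smooth(img, size, sigma):
    """Gaussian smoothing with a normalised 2-D kernel (symmetric padding)."""
    ax = np.arange(size) - (size - 1) / 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    kern = np.outer(g, g)
    kern /= kern.sum()
    p = size // 2
    padded = np.pad(img, p, mode='symmetric')
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kern)
    return out

def unsharp_mask(img, size, sigma, k):
    # Add back k times the high-frequency residual (img - smoothed);
    # for k > 1 this also amplifies whatever noise the residual contains.
    return img + k * (img - smooth(img, size, sigma))

def snr(img):
    # One common SNR definition (assumed): mean signal over standard deviation
    return img.mean() / img.std()
```

On a noisy image, snr(unsharp_mask(img, ...)) comes out below snr(img), and a larger weighting factor lowers it further, which is the trend shown in the table above.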

Section D:
D.1 Include the original image as a figure, along with the magnitude spectrum in both its full and 200 x 200 zoomed version.
Grayscale textured Image, Section D

Fourier transform of the textured Image zoomed in at the centre, Section D

Fourier transform of the textured Image, Section D

D.2 Explain the magnitude spectrum pattern in the central 200 x 200 pixel block. Describe the frequency content of the original image. Regarding the frequency content of the original image, the centre of the spectrum contains the low-frequency information while the corners contain the high-frequency information. In MATLAB we shift the 2D FFT to move the zero-frequency component to the centre of the spectrum, which is more convenient for visualising the Fourier transform. In the zoomed-in Fourier transform, the bright centre pixel, circled in yellow, corresponds to the DC component of the image. The bright pixels on the y-axis, highlighted by purple circles, indicate the presence of sinusoidal patterns along the y direction of the image. Similarly, the bright pixels on the x-axis (enclosed in green circles) indicate sinusoidal patterns along the x direction. The square shape in the Fourier transform indicates that product terms such as sin(ax) × sin(by) are present in the original image; two such squares are highlighted with red rectangles. The grainy appearance of the rest of the Fourier transform is indicative of the noise present in the textured image.
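The product-term interpretation can be verified with a synthetic pattern (NumPy assumed; the 64×64 size and the frequencies 4 and 8 are illustrative): a sin(ax)·sin(by) image produces a DC peak at the centre of the shifted spectrum plus four symmetric off-axis peaks.

```python
import numpy as np

N = 64
x = np.arange(N)
X, Y = np.meshgrid(x, x)  # X varies along columns, Y along rows

# Offset product pattern: frequency 4 cycles along x, 8 cycles along y
img = 0.5 + 0.5 * np.sin(2 * np.pi * 4 * X / N) * np.sin(2 * np.pi * 8 * Y / N)

# Centred magnitude spectrum, as in the report's figures
F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
cy = cx = N // 2

print(F[cy, cx])           # DC peak: N*N*0.5 = 2048 (the image mean)
print(F[cy + 8, cx + 4])   # one of the four product-term peaks
```

The four peaks sit at (±4, ±8) frequency bins, the corners of a rectangle around the centre, which is the kind of symmetric bright-spot arrangement highlighted in the zoomed figure.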

D.3 Include the smoothed image as a figure, along with the magnitude spectrum of the smoothed image in both its full and 200 x 200 zoomed version.

Filtered textured Image, Section D

Fourier transform of the smoothed textured Image zoomed in at the centre, Section D

Fourier transform of the smoothed textured Image, Section D

D.4 Explain the magnitude spectrum pattern of the smoothed image in the central 200 x 200 pixel block. Describe the frequency content of the smoothed image. How and why does it differ to the frequency content of the original image?

