Thursday, September 23, 2010

Activity 13 - Color Image Segmentation

In this activity, a region of interest was separated from the rest of the image through image segmentation. In this case, a colored region of interest was segmented. A monochromatic patch was cropped from the image, its RGB values were determined, and these were transformed into normalized chromaticity coordinates. Per pixel, let I = R + G + B. The normalized chromaticity coordinates r, g, b are given by

r = R/I; g = G/I; b = B/I
Equation 1

Since r + g + b = 1, we can write b = 1 - r - g, showing that b depends on r and g. The normalized chromaticity can therefore be represented by just two coordinates; in this case, r is plotted on the y axis and g on the x axis.
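
As a minimal sketch (Scilab, in the style of the appendix below; the filename is a placeholder), the per-pixel conversion to normalized chromaticity coordinates is:

//Sketch: RGB image to normalized chromaticity coordinates.
I = double(imread('roi.jpg')); //placeholder filename
S = I(:,:,1) + I(:,:,2) + I(:,:,3); //per-pixel I = R + G + B
S(find(S==0)) = 1; //avoid division by zero at black pixels
r = I(:,:,1)./S; //red chromaticity
g = I(:,:,2)./S; //green chromaticity
//b = 1 - r - g, so (r,g) is enough to specify the chromaticity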

In this activity, parametric and nonparametric methods were used to segment a colored region of interest. In the parametric method, the probability that a pixel belongs to the color distribution of interest is computed: the mean and standard deviation of the r and g values of a cropped subimage of the ROI are calculated, and, assuming independent Gaussian distributions for r and g, the joint probability of the r and g values yields the segmented region of interest. The following equation gives the probability that a pixel with chromaticity r belongs to the ROI; an analogous equation is used for g.

p(r) = 1/(σr √(2π)) exp(-(r - μr)² / (2σr²))
Equation 2

On the other hand, the nonparametric method uses histogram backprojection: the 2-D r-g histogram of the cropped subimage is computed, and each pixel of the image is assigned the histogram value at its own (r, g) location (see the appendix). Figures 1 to 3 show three images with differently colored regions of interest. As observed, the nonparametric method gave the cleaner segmentation of the desired region of interest.



Figure 1. (a) An image of a group of friends surrounded by water.
(b) Segmented using the Parametric Method.
(c) Segmented using the Nonparametric Method.
(d) Cropped color of interest and its location in the chromaticity coordinates.




Figure 2. (a) An image of a mound of ice cream.
(b) Segmented using the Parametric Method.
(c) Segmented using the Nonparametric Method.
(d) Cropped color of interest and its location in the chromaticity coordinates.



Figure 3. (a) An image of a child holding a red toy.
(b) Segmented using the Parametric Method.
(c) Segmented using the Nonparametric Method.
(d) Cropped color of interest and its location in the chromaticity coordinates.

Self-Evaluation:
Credits:

Appendix: Source Code
stacksize(100000000);

PATH = 'C:\Users\Micent\Documents\Applied Physics\acad 2010-2011 1st sem\App Phy 186 - Instrumentation Physics II\Activity 13\';
ROI = 'melissa';
subimage = 'red';

//region of interest
I1 = imread(strcat([PATH,ROI,'.jpg']));
[r,c] = size(I1);
I1 = double(I1);
R1 = I1(:,:,1);
G1 = I1(:,:,2);
B1 = I1(:,:,3);

Int1 = R1 + G1 + B1;

Int1(find(Int1==0))=100000; //avoid division by zero at black pixels

r1 = R1./Int1;
g1 = G1./Int1;

//subregion of ROI
I2 = imread(strcat([PATH,subimage,'.jpg']));
I2 = double(I2);
R2 = I2(:,:,1);
G2 = I2(:,:,2);
B2 = I2(:,:,3);

Int2 = R2 + G2 + B2;

Int2(find(Int2==0))=100000; //avoid division by zero at black pixels

r2 = R2./Int2;
g2 = G2./Int2;


//Parametric Segmentation
ur = mean(r2); ug = mean(g2); //mean chromaticities of the cropped patch
sr = stdev(r2); sg = stdev(g2); //and their standard deviations

pr = (1/(sr*sqrt(2*%pi)))*exp(-(r1-ur).^2/(2*sr^2));
pg = (1/(sg*sqrt(2*%pi)))*exp(-(g1-ug).^2/(2*sg^2));
I3 = pr.*pg; //segmented image

//imwrite(I3,strcat([PATH,ROI,'_',subimage,'_parametric','.jpg']));


//Non-parametric Segmentation
// Histogram Plot
BINS = 256;
rint = round(r2*(BINS-1) + 1);
gint = round(g2*(BINS-1) + 1);

colors = gint(:) + (rint(:)-1)*BINS; //linear index of each (r,g) pair into the 2-D histogram

hist = zeros(BINS,BINS);

for row = 1:BINS
    for col = 1:(BINS-row+1) //restrict to the r+g<=1 triangle
        hist(row,col) = length(find(colors==(col + (row-1)*BINS)));
    end;
end;
scf(1); imshow(hist,[]);
scf(2); mesh(hist);
xs2bmp(1,strcat([PATH,subimage,'_histogram','.bmp']));

//Histogram Backprojection
I4 = zeros(r,c); //segmented image

r1B = r1*(BINS-1); //uniform
g1B = g1*(BINS-1); //uniform
//Pixel-per-Backprojection
for i = 1:r
    for j = 1:c
        m = round(r1B(i,j)) + 1;
        n = round(g1B(i,j)) + 1;
        I4(i,j) = hist(m,n);
    end
end

imwrite(I4,strcat([PATH,ROI,'_',subimage,'_nonparametric','.jpg']));

Activity 12 - Color Camera Processing


An array of pixels, each having red, green and blue light overlaid in various proportions, produces a colored digital image. The color captured per pixel by a digital color camera is the integral of the product of the spectral power distribution of the incident light source S(λ), the surface reflectance ρ(λ) and the spectral sensitivity of the camera η(λ). If the color camera has spectral sensitivities ηR(λ), ηG(λ), ηB(λ) for the red, green and blue channels respectively, then the color of the pixel is given by

R = KR ∫ S(λ) ρ(λ) ηR(λ) dλ
G = KG ∫ S(λ) ρ(λ) ηG(λ) dλ
B = KB ∫ S(λ) ρ(λ) ηB(λ) dλ
Equation 1

where Ki is a balancing constant equal to the inverse of the camera output when shown a white object (ρ(λ) = 1.0), that is,

Ki = 1 / ∫ S(λ) ηi(λ) dλ, for i = R, G, B
Equation 2

These two sets of equations explain why the colors captured by a digital camera sometimes appear unsatisfactory: if the Ki do not match the actual illumination, every channel is scaled by the wrong constant.

Nowadays, digital color cameras offer a growing number of options and settings for capturing a scene under the appropriate or desired conditions. One setting that strongly affects the appearance of the captured image is the White Balance (WB) setting. Presets such as sunny, cloudy, incandescent and fluorescent lamp may be selected; beginners, most of the time, leave the WB on automatic. In this activity, two popular algorithms for automatic white balancing were applied to the images.

In the White Patch Algorithm, an image is captured with a wrongly balanced camera and the RGB values of a known white object in the scene are used as the balancing divisors. The Gray World Algorithm, on the other hand, assumes that the average color of the world is gray: the balancing constants are the average red, green and blue values of the captured image itself.
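
A minimal sketch of how the two balancing constants differ (the full per-channel division and clipping is in the appendix below; the white-pixel coordinates are the ones used there):

//Sketch: balancing constants of the two algorithms.
I = double(imread('unbalanced.jpg')); //placeholder filename
//White Patch: RGB of a known white pixel, here at row 450, column 850
Kwp = [I(450,850,1), I(450,850,2), I(450,850,3)];
//Gray World: per-channel mean of the whole image
Kgw = [mean(I(:,:,1)), mean(I(:,:,2)), mean(I(:,:,3))];
//each channel is then divided by its constant and clipped at 1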

Figure 1 shows an ensemble of colorful objects taken under different white balance settings. The images were taken at an exposure value (EV) of -2 to ensure that the processed images would not go beyond the maximum pixel value. The two algorithms were implemented, and the White Patch Algorithm produced the more accurate and pleasing correction of the wrongly balanced image. The two algorithms were also applied to an ensemble of objects of the same hue (see Figure 2).


(a) (Left) Taken under Cloudy White Balance.
(Middle) Corrected with the White Patch Algorithm.
(Right) Corrected with the Gray World Algorithm.
(b) (Left) Taken under Incandescent White Balance.
(Middle) Corrected with the White Patch Algorithm.
(Right) Corrected with the Gray World Algorithm.
(c) (Left) Taken under Sunny White Balance.
(Middle) Corrected with the White Patch Algorithm.
(Right) Corrected with the Gray World Algorithm.
Figure 1. Ensemble of objects with different hues

(a) (Left) Taken under Cloudy White Balance.
(Middle) Corrected with the White Patch Algorithm.
(Right) Corrected with the Gray World Algorithm.
(b) (Left) Taken under Incandescent White Balance.
(Middle) Corrected with the White Patch Algorithm.
(Right) Corrected with the Gray World Algorithm.
(c) (Left) Taken under Sunny White Balance.
(Middle) Corrected with the White Patch Algorithm.
(Right) Corrected with the Gray World Algorithm.
Figure 2. Ensemble of objects with the same hue

Self-evaluation:
Credits:

Appendix: Source code
stacksize(100000000);

PATH = 'C:\Users\Micent\Documents\Applied Physics\acad 2010-2011 1st sem\App Phy 186 - Instrumentation Physics II\Activity 12\';
name = 'EV-2_incandescent'; // automatic, cloudy, fluorescent, incandescent, sunny

I = imread(strcat([PATH,name,'.jpg'])); //wrongly balanced image

//White Patch Algorithm
Rw = I(450,850,1); //white value
Gw = I(450,850,2);
Bw = I(450,850,3);

//divide each channel by its white value
R = I(:,:,1)/Rw;
G = I(:,:,2)/Gw;
B = I(:,:,3)/Bw;

//clip values that exceed 1
R(find(R>1)) = 1;
G(find(G>1)) = 1;
B(find(B>1)) = 1;

I2 = [];
I2(:,:,1) = R;
I2(:,:,2) = G;
I2(:,:,3) = B;

//scf(); imshow(I2,[]);
imwrite(I2,strcat([PATH,name,'_wpa_image','.jpg']));


//Gray World Algorithm
//Rw = sum(I(:,:,1))/length(I(:,:,1));
//Gw = sum(I(:,:,2))/length(I(:,:,2));
//Bw = sum(I(:,:,3))/length(I(:,:,3));

Rw = mean(I(:,:,1));
Gw = mean(I(:,:,2));
Bw = mean(I(:,:,3));

R = I(:,:,1)/Rw;
G = I(:,:,2)/Gw;
B = I(:,:,3)/Bw;

R(find(R>1)) = 1;
G(find(G>1)) = 1;
B(find(B>1)) = 1;

I3 = [];
I3(:,:,1) = R;
I3(:,:,2) = G;
I3(:,:,3) = B;

//scf(); imshow(I3,[]);
imwrite(I3,strcat([PATH,name,'_gwa_image','.jpg']));


Activity 10 - Binary Operations


In this activity, we aim to segment a particular region of interest (ROI) from the background. The image was first binarized so that separating the ROI from the background becomes easier, with the threshold chosen from the image histogram. Figure 1 shows scattered punched paper circles, digitized with a flatbed scanner, which simulate normal cells imaged under a microscope. The threshold value used in binarizing the unit normal cell is 0.80; for the 12 subimages, a threshold value of 0.845 was used.
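
Since no source code was appended for this activity, here is a minimal sketch of the measurement pipeline, assuming the bwlabel() of the same Scilab image toolbox used in the other appendices (the filename is a placeholder):

//Sketch: binarize with the chosen threshold, then measure blob areas.
I = gray_imread('circles.jpg'); //placeholder filename, values in [0,1]
BW = bool2s(I > 0.845); //threshold taken from the histogram
[L, nblobs] = bwlabel(BW); //label connected components
areas = zeros(1,nblobs);
for k = 1:nblobs
    areas(k) = length(find(L==k)); //blob area in pixels
end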

Figure 1. Image of scattered punched paper digitized using a flatbed scanner.

Figure 2. Image of a unit normal cell binarized in order to estimate the area of a unit cell.
This unit normal cell has an area of 530 pixels.
Figure 3. Histogram of the unit normal cell in Figure 2.
Figure 4. Histogram of the Total Normal Cells Area

Figure 5. Segmented Normal Cells

From Figure 5, the mean area is found to be 521.85294 pixels with a standard deviation of 18.871387 pixels.



Figure 6 shows an image of normal cells with 5 cancer cells. This image was binarized using a threshold value of 0.845. Figure 7 shows an image of a unit cancer cell, binarized with the same threshold value. Using the histogram shown in Figure 8, the 5 cancer cells were segmented as shown in Figure 9. However, since some of the normal cells overlap, their combined area falls within the range of possible cancer-cell areas, so a few merged normal cells are picked up as well (see the sketch below).
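
Continuing the sketch above, oversized blobs can be isolated by keeping only those whose area lies beyond the normal range; the cutoff below is an illustrative choice built from the mean and standard deviation reported earlier:

//Sketch: keep only blobs larger than the normal-cell range.
cutoff = 521.85 + 3*18.87; //mean + 3 standard deviations
cancer = zeros(L); //L, nblobs, areas from the previous sketch
for k = 1:nblobs
    if areas(k) > cutoff then
        cancer(find(L==k)) = 1; //oversized blob: candidate cancer cell
    end
end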


Figure 6. Normal Cells with 5 Cancer Cells

Figure 7. Unit Cancer Cell. This has an area of 1001 pixels.
Figure 8. Histogram of Total Area with Cancer Cell
Figure 9. Segmented Cancer Cells with some Normal Cells included.


Self-evaluation:
Credits:

Activity 8 - Enhancement in the Frequency Domain

In this activity, the convolution theorem was applied to mask frequencies in the Fourier domain of an image containing unwanted repetitive patterns. Filter masks were created to block out the frequencies that produce the unwanted patterns. Recall that:


(1) The Fourier transform of a convolution of two functions in space is the product of the two functions' Fourier transforms, that is, FT[f*g] = FG.

(2) The convolution of a Dirac delta with a function f(t) is a replication of f(t) at the location of the Dirac delta, that is,

f(t) * δ(t - t0) = f(t - t0)
Equation 1
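
Both recalls can be checked numerically with the same fft2() and ifft() calls used in the appendix; the minimal sketch below multiplies the transforms (recall 1) and shows that a displaced Dirac delta replicates the function at its location (recall 2):

//Sketch: FT[f*d] = FD, and a shifted Dirac delta replicates f.
f = zeros(64,64); f(29:36,29:36) = 1; //small square at the center
d = zeros(64,64); d(1,17) = 1; //Dirac delta, a 16-px shift along x
g = abs(ifft(fft2(f).*fft2(d))); //multiply the transforms, then invert
//g contains the same square displaced 16 px along x (the shift is circular)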

In the first part of the activity, the Fourier transforms of binary images of two dots, two circles, two squares and two Gaussians were taken and their moduli were observed (see Figures 1 to 4).



Figure 1. (a) Binary image of two dots along the x-axis and symmetric about the center.
(b) Modulus of the Fourier transform.




Figure 2. Binary image of two circles along the x-axis symmetric about the center and the modulus of the Fourier transform.
(a) radius = 2, (b) radius = 4, (c) radius = 8, (d) radius = 16, (e) radius = 32




Figure 3. Binary image of two squares along the x-axis symmetric about the center and the modulus of the Fourier transform.
(a) width = 2, (b) width = 4, (c) width = 8, (d) width = 16, (e) width = 32




Figure 4. Binary image of two Gaussians along the x-axis symmetric about the center and the modulus of the Fourier transform.
(a) variance = 2, (b) variance = 4, (c) variance = 8, (d) variance = 16, (e) variance = 32



Figure 5 shows the convolution of two matrices: a 200x200 matrix A with ten 1's randomly located, and a 3x3 matrix d with a certain pattern, defined as d = [1 0 1; 0 1 0; 1 0 1]. The convolution was taken using the imconv() function of Scilab.
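
A minimal sketch of how Figure 5 can be generated (this part of the code is not in the appendix; grand() supplies the random positions):

//Sketch: ten randomly placed 1's convolved with the 3x3 pattern d.
A = zeros(200,200);
idx = grand(1,10,'uin',1,200*200); //10 random linear indices into A
A(idx) = 1;
d = [1 0 1; 0 1 0; 1 0 1];
B = imconv(A,d); //d is stamped at every 1 of A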



Figure 5. Convolution of two matrices: 200x200 matrix A with ten 1's randomly located and 3x3 matrix d with a certain pattern.




Figure 6. Convolution of two matrices: 200x200 matrix A with 1's equally spaced and 3x3 matrix d with a certain pattern.
(a) 2 px apart, (b) 4 px apart, (c) 20 px apart


Figure 6 shows the results for a 200x200 matrix with 1's equally spaced along the x and y axes; the modulus of the Fourier transform of such a lattice is itself a lattice, with spacing inversely proportional to the spacing of the 1's. Results for different spacings are also shown.
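
A minimal sketch of the lattice experiment (also not in the appendix), with the spacing N as the parameter being varied:

//Sketch: equally spaced 1's and the modulus of the Fourier transform.
N = 20; //spacing in pixels, as in (c)
A = zeros(200,200);
A(1:N:200,1:N:200) = 1; //regular 2-D lattice of 1's
FA = abs(fftshift(fft2(A))); //also a lattice, spacing inversely proportional to N
imshow(FA,[]);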



As mentioned, the convolution theorem may be applied to remove unwanted repetitive patterns by filtering the frequency domain of the image of interest. In this part of the activity, three images were used to illustrate this.


Figure 7.a (left) shows an image of a fingerprint (downloaded from the internet). Figure 7.a (right) shows the same fingerprint with enhanced ridges and without blotches. The image was enhanced by masking certain frequencies in the Fourier domain using a Gaussian filter mask (see Figure 7.b).



Figure 7. (a) Original fingerprint (left) and enhanced fingerprint (right).
(b) Filter mask and the frequency domain.
(taken from: http://www.papillon.ru/files/image/content/2008/pic/L_scan/smaz_paper_small.png)



Figure 8.a shows an image of the surface of the moon taken in 1967, prior to the Apollo missions. This image was improved by removing the vertical lines through filtering in the Fourier domain. Figure 8.b shows the matrix used to mask the unwanted frequencies (a horizontal line in the domain). Using the convolution theorem, the filtered spectrum and the enhanced image shown in Figure 8.b were obtained.



Figure 8. (a) Lunar Orbiter image; (b) filter mask. "The two groups of irregularly shaped craters north and west of the landing site are secondaries from Sabine Crater. This view was obtained by the unmanned Lunar Orbiter V spacecraft in 1967 prior to the Apollo missions to the Moon. The black and white film was automatically developed onboard the spacecraft and subsequently digitized for transmission to Earth. The regularly spaced vertical lines are the result of combining individually digitized 'framelets' to make a composite photograph and the irregularly-shaped bright and dark spots are due to nonuniform film development. [NASA Lunar Orbiter photograph]"



Lastly, canvas weave patterns in a subimage of an oil painting from the UP Vargas Museum Collection (see Figure 9.a) were removed using the filter mask shown in Figure 9.b. Figure 10 shows the modulus of the inverse Fourier transform of the inverted filter mask; the resulting image closely resembles the observed canvas weave pattern.



Figure 9. (a) Subimage with the canvas weave pattern removed, showing the brush strokes. (b) Filter mask used.


Figure 10. Canvas Weave Pattern


Self-Evaluation:

Credits:


Appendix: Source Code

//Activity 8.B Fingerprints : Ridge Enhancement
//(PATH is the same working directory defined in the earlier appendices)

filename = 'fingerprintsmall';

//http://www.papillon.ru/files/image/content/2008/pic/L_scan/smaz_paper_small.png

fingerprint = gray_imread(strcat([PATH,filename,'.png']));


FT_fingerprint = abs(fftshift(fft2(fingerprint)));

Im = FT_fingerprint;

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_fft','.bmp']));


FT_fingerprint2 = log(abs(fftshift(fft2(fingerprint))));

Im = FT_fingerprint2;

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_log_fft','.bmp']));


//filter mask

[m,n] = size(fingerprint);

x = linspace(-m/2,m/2,m); //defines the range

y = linspace(-n/2,n/2,n);

[X,Y] = ndgrid(x,y); //creates two 2-D arrays of x and y coordinates

u = 0; //peak location

v = 0;

s1 = 84; //Gaussian width

FM1 = exp((-((X-u).^2)/s1^2) + (-((Y-v).^2)/s1^2));

s2 = 48;

FM2 = exp((-((X-u).^2)/s2^2) + (-((Y-v).^2)/s2^2));

s3 = 2;

FM3 = exp((-((X-u).^2)/s3^2) + (-((Y-v).^2)/s3^2));

filter_mask = FM1 - FM2 + FM3;

filter_mask(m/2,n/2) = 1; //keep the DC term

imwrite(filter_mask, strcat([PATH,'filter_mask','.bmp']));


//filtering

filter_mask = gray_imread(strcat([PATH,'filter_mask.bmp']));

filtered_FFT = filter_mask.*FT_fingerprint2;

//filtered_FFT = imconv(fingerprint,filter_mask);

Im = (filtered_FFT);

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_filtered_FFT','.bmp']));


//enhancement

enhanced = abs(ifft(fftshift(filter_mask).*fft2(fingerprint)));

Im = (enhanced);

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_enhanced','.bmp']));



//difference of original to processed

original = gray_imread(strcat([PATH,filename,'.png']));

processed = gray_imread(strcat([PATH,filename,'_enhanced','.bmp']));

Imdiff = original - processed;

imwrite(Imdiff, strcat([PATH,'difference_',filename,'.bmp']));







//Activity 8.C Lunar Landing Scanned Pictures : Line removal

filename = 'lunar';

Lunar = gray_imread(strcat([PATH,filename,'.bmp']));

imwrite(Lunar, strcat([PATH,filename,'_grayscale','.bmp']));

[m,n] = size(Lunar);


//Fourier transform

FT_lunar = abs(fftshift(fft2(Lunar)));

Im = FT_lunar;

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_fft','.bmp']));


FT_lunar2 = log(abs(fftshift(fft2(Lunar))));

Im = FT_lunar2;

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_log_fft','.bmp']));


//vertical lines filter

filter_mask = zeros(m,n);

kx = 2;

ky = 16; //affects the contrast

filter_mask(m/2-kx:m/2+kx,1:n/2-ky) = 1;

filter_mask(m/2-kx:m/2+kx,n/2+ky:n) = 1;

//filter_mask(1:m/2-ky,n/2-kx:n/2+kx) = 1;

//filter_mask(m/2+ky:m,n/2-kx:n/2+kx) = 1;

filter_mask = 1 - filter_mask; //invert: block the strip, pass everything else

imwrite(filter_mask, strcat([PATH,filename,'_filter_mask','.bmp']));


//filtering

filter_mask = gray_imread(strcat([PATH,filename,'_filter_mask.bmp']));

filtered_FFT = filter_mask.*FT_lunar2;

//filtered_FFT = imconv(Lunar,filter_mask);

Im = (filtered_FFT);

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_filtered_FFT','.bmp']));


//enhancement

enhanced = abs(ifft(fftshift(filter_mask).*fft2(Lunar)));

Im = (enhanced);

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_enhanced','.bmp']));


//difference of original to processed

original = gray_imread(strcat([PATH,filename,'.bmp']));

processed = gray_imread(strcat([PATH,filename,'_enhanced','.bmp']));

Imdiff = original - processed;

imwrite(Imdiff, strcat([PATH,filename,'_difference','.bmp']));




//Activity 8.D Canvas Weave Modeling and Removal

filename = 'canvasweave';

canvasweave = gray_imread(strcat([PATH,filename,'.jpg']));

imwrite(canvasweave, strcat([PATH,filename,'_grayscale','.bmp']));


//Fourier transform

FT_canvasweave = abs(fftshift(fft2(canvasweave)));

Im = FT_canvasweave;

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_fft','.bmp']));


FT_canvasweave2 = log(abs(fftshift(fft2(canvasweave))));

Im = FT_canvasweave2;

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_log_fft','.bmp']));


//circular filter

[m,n] = size(canvasweave);

x = linspace(1,m,m); //defines the range

y = linspace(1,n,n);

[X,Y] = ndgrid(x,y); //creates two 2-D arrays of x and y coordinates

filter_mask = ones(m,n);

R = 6;

v = [290 290 290 290 244 335 244 336 198 381]; //peak location

u = [163 115 256 302 186 185 233 231 210 208];

for i = 1:10
    r = sqrt((X-u(1,i)).^2 + (Y-v(1,i)).^2);
    filter_mask(find(r<R)) = 0; //block a disk of radius R around each peak
end

imwrite(filter_mask, strcat([PATH,filename,'_filter_mask','.bmp']));


filter_mask = 1 - filter_mask;

inv_filter_mask = log(abs(fftshift(fft2(filter_mask))));

Im = (inv_filter_mask);

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_filter_mask_inverse','.bmp']));


//filtering

filter_mask = gray_imread(strcat([PATH,filename,'_filter_mask.bmp']));

filtered_FFT = filter_mask.*FT_canvasweave2;

//filtered_FFT = imconv(canvasweave,filter_mask);

Im = (filtered_FFT);

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_filtered_FFT','.bmp']));


//enhancement

enhanced = abs(ifft(fftshift(filter_mask).*fft2(canvasweave)));

Im = (enhanced);

Im = (Im - min(Im))/(max(Im)-min(Im));

imwrite(Im, strcat([PATH,filename,'_enhanced','.bmp']));


//difference of original to processed

original = gray_imread(strcat([PATH,filename,'.jpg']));

processed = gray_imread(strcat([PATH,filename,'_enhanced','.bmp']));

Imdiff = original - processed;

imwrite(Imdiff, strcat([PATH,filename,'_difference','.bmp']));