Histogram equalization on GIF images in MATLAB

I'm having a bit of trouble understanding how to change the colormap of a grayscale GIF image after performing histogram equalization on the image. The process is straightforward with image formats that don't have an associated colormap, such as JPEG, and I've gotten it to work with grayscale JPEG images.
clear
clc
[I,map] = imread('moon.gif');
h = zeros(256,1);    % array value holds number of pixels with same value
hmap = zeros(256,1);
P = zeros(256,1);    % probability that pixel intensity will appear in image
Pmap = zeros(256,1);
s = zeros(256,1);    % calculated CDF using P
smap = zeros(256,1);
M = size(I,1);
N = size(I,2);
I = double(I);
Inew = double(zeros(M,N));
mapnew = zeros(256,3);
for x = 1:M
    for y = 1:N
        for l = 1:256
            % count pixel intensities and probability
        end
    end
end
for j = 2:256
    for i = 2:j
        % calculate CDF of P
    end
end
s(1) = P(1);
smap(1) = Pmap(1);
for x = 1:M
    for y = 1:N
        for l = 1:256
            % calculate adjusted CDF and set it in the new image
        end
    end
end
mapnew = mapnew/256;
Inew = uint8(Inew);
I = uint8(I);
subplot(1,2,1), imshow(Inew,map);    % compare the original map
subplot(1,2,2), imshow(Inew,mapnew); % to the 'enhanced' colormap, but both turn out poorly
All is fine in terms of the equalization of the actual image, but I'm not sure what to change about the colormap. I tried performing the same operations on the colormap that I did on the image, but no dice.
Sorry that I can't post images because of my low rep, but I'll try to provide all the info I can on request.
Any help would be greatly appreciated.

function J = histeqo(I)
J = I;
[m,n] = size(I);
[h,d] = imhist(I);
ch = cumsum(h);          % the cumulative frequency
imagesize = m*n;         % the image size
lightsize = size(d,1);   % the lighting range
tr = ch*(lightsize/imagesize); % adjustment function
for x = 1:m
    for y = 1:n
        J(x,y) = tr(I(x,y)+1);
    end
end
subplot(1,2,1); imshow(J);
subplot(1,2,2); imhist(J);
end
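For the indexed GIF in the original question, note that the pixel values of an indexed image are lookups into map, not intensities, so equalizing the indices alone won't change what is displayed. A minimal sketch using built-ins, assuming a plain grayscale result is acceptable:
[I, map] = imread('moon.gif');
G = ind2gray(I, map);   % indexed -> intensity image using the colormap
Geq = histeq(G, 256);   % built-in histogram equalization with 256 levels
figure;
subplot(1,2,1), imshow(G),   title('original');
subplot(1,2,2), imshow(Geq), title('equalized');
If the image must stay indexed, the equalization mapping has to be applied to the gray levels stored in map (weighted by how often each index occurs), rather than to the index values themselves.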

Related

Eigenvalue decomposition of structure tensor in MATLAB

I have a synthetic image. I want to do an eigenvalue decomposition of its local structure tensor (LST) for some edge detection purposes. I used the eigenvalues l1, l2 and eigenvectors e1, e2 of the LST to generate an adaptive ellipse for each pixel of the image. Unfortunately I get unequal eigenvalues l1, l2, and so unequal semi-axis lengths of the ellipse, for homogeneous regions of my figure:
However I get a good response for a simple test image:
I don't know what is wrong with my code:
function [H,e1,e2,l1,l2] = LST_eig(I,sigma1,rw)
% LST_eig - compute the structure tensor and its eigen
% value decomposition
%
% H = LST_eig(I,sigma1,rw);
%
% sigma1 is pre-smoothing width (in pixels).
% rw is filter bandwidth radius for tensor smoothing (in pixels).
%
n = size(I,1);
m = size(I,2);
if nargin<2
    sigma1 = 0.5;
end
if nargin<3
    rw = 0.001;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% pre-smoothing
J = imgaussfilt(I,sigma1);
% compute gradient using the Scharr operator
Sch = [-3 0 3;-10 0 10;-3 0 3];
%h = fspecial('sobel');
gx = imfilter(J,Sch,'replicate');
gy = imfilter(J,Sch','replicate');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% compute tensors
gx2 = gx.^2;
gy2 = gy.^2;
gxy = gx.*gy;
% smooth
gx2_sm = imgaussfilt(gx2,rw); %rw/sqrt(2*log(2))
gy2_sm = imgaussfilt(gy2,rw);
gxy_sm = imgaussfilt(gxy,rw);
H = zeros(n,m,2,2);
H(:,:,1,1) = gx2_sm;
H(:,:,2,2) = gy2_sm;
H(:,:,1,2) = gxy_sm;
H(:,:,2,1) = gxy_sm;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% eigen decomposition
l1 = zeros(n,m);
l2 = zeros(n,m);
e1 = zeros(n,m,2);
e2 = zeros(n,m,2);
for i = 1:n
    for j = 1:m
        Hmat = zeros(2);
        Hmat(:,:) = H(i,j,:,:);
        [V,D] = eigs(Hmat);
        D = abs(D);
        l1(i,j) = D(1,1); % eigenvalues
        l2(i,j) = D(2,2);
        e1(i,j,:) = V(:,1); % eigenvectors
        e2(i,j,:) = V(:,2);
    end
end
Any help is appreciated.
This is my ellipse drawing code:
% determine ellipse parameters from the eigenvalue decomposition of the LST
M = input('Enter the maximum allowed semi-major axes length: ');
I = input('Enter the input data: ');
row = size(I,1);
col = size(I,2);
a = zeros(row,col);
b = zeros(row,col);
cos_phi = zeros(row,col);
sin_phi = zeros(row,col);
for m = 1:row
    for n = 1:col
        a(m,n) = (l2(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        b(m,n) = (l1(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        cos_phi1 = e1(m,n,1);
        sin_phi1 = e1(m,n,2);
        len = hypot(cos_phi1,sin_phi1);
        cos_phi(m,n) = cos_phi1/len;
        sin_phi(m,n) = sin_phi1/len;
    end
end
%% plot elliptic structuring elements using the parametric equation and superimpose on the image
figure; imagesc(I); colorbar; hold on
t = linspace(0,2*pi,50);
for i = 10:10:row-10
    for j = 10:10:col-10
        x0 = j;
        y0 = i;
        x = a(i,j)/2*cos(t)*cos_phi(i,j) - b(i,j)/2*sin(t)*sin_phi(i,j) + x0;
        y = a(i,j)/2*cos(t)*sin_phi(i,j) + b(i,j)/2*sin(t)*cos_phi(i,j) + y0;
        plot(x,y,'r','linewidth',1);
        hold on
    end
end
This my new result with the Gaussian derivative kernel:
This is the new plot with axis equal:
I created a test image similar to yours (probably less complicated) as follows:
pos = yy([400,500]) + 100 * sin(xx(400)/400*2*pi);
img = gaussianlineclip(pos+50,7) + gaussianlineclip(pos-50,7);
I = double(stretch(img));
(This requires DIPimage to run)
Then ran your LST_eig on it (sigma1=1 and rw=3) and your code to draw ellipses (no change to either, except adding axis equal), and got this result:
I suspect some non-uniformity in some of the blue areas of your image, which causes very small gradients to appear. The problem with the definition of the ellipses as you use them is that, for sufficiently oriented patterns, you'll get a line even if that pattern is imperceptible. You can get around this by defining your ellipse axis lengths as follows:
a = repmat(M,size(l2)); % longest axis is always the same
b = M ./ (l2+1); % shortest axis is shorter the more important the largest eigenvalue is
The smallest eigenvalue l1 is high in regions with strong gradients but no clear direction. The above does not take this into account. One option could be to make a depend on both energy and anisotropy measures, and b depend only on energy:
T = 1000; % some threshold
r = M ./ max(l1+l2-T,1); % circle radius, smaller for higher energy
d = (l2-l1) ./ (l1+l2+eps); % anisotropy measure in range [0,1]
a = M*d + r.*(1-d); % use `M` length for high anisotropy, use `r` length for high isotropy (circle)
b = r; % use `r` width always
This way, the whole ellipse shrinks if there are strong gradients but no clear direction, whereas it stays large and circular when there are only weak or no gradients. The threshold T depends on image intensities, adjust as needed.
You should probably also consider taking the square root of the eigenvalues, as they correspond to the variance.
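A minimal way to apply that last point, right after computing the eigenvalues:
l1 = sqrt(l1); % standard deviations rather than variances
l2 = sqrt(l2);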
Some suggestions:
You can write
a = (l2+eps)./(l1+l2+2*eps) * M;
b = (l1+eps)./(l1+l2+2*eps) * M;
cos_phi = e1(:,:,1);
sin_phi = e1(:,:,2);
without a loop. Note that e1 is normalized by definition, there is no need to normalize it again.
Use Gaussian gradients instead of Gaussian smoothing followed by Sobel or Scharr filters. See here for some MATLAB implementation details.
Use eig, not eigs, when you need all eigenvalues. Especially for such a small matrix, there is no advantage to using eigs. eig seems to produce more consistent results. There is no need to take the absolute value of the eigenvalues (D = abs(D)), as they are non-negative by definition.
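If you want to skip the per-pixel loop altogether, a 2x2 symmetric matrix has closed-form eigenvalues. A sketch of that idea, reusing gx2_sm, gy2_sm and gxy_sm from LST_eig above (l1 is the smaller eigenvalue, matching the convention used earlier in this answer):
tr   = gx2_sm + gy2_sm;                          % trace of the tensor
disc = sqrt((gx2_sm - gy2_sm).^2 + 4*gxy_sm.^2); % discriminant
l1 = (tr - disc)/2;  % smaller eigenvalue, per pixel
l2 = (tr + disc)/2;  % larger eigenvalue, per pixel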
Your default value of rw = 0.001 is way too small, a sigma of that size has no effect on the image. The goal of this smoothing is to average gradients in a local neighborhood. I used rw=3 with good results.
Use DIPimage. There is a structuretensor function, Gaussian gradients, and a lot more useful stuff. The 3.0 version (still in development) is a major rewrite that improves significantly on dealing with vector- and matrix-valued images. I can write all of your LST_eig as follows:
I = dip_image(I);
g = gradient(I, sigma1);
H = gaussf(g*g.', rw);
[e,l] = eig(H);
% Equivalences with your outputs:
l1 = l{2};
l2 = l{1};
e1 = e{2,:};
e2 = e{1,:};

Get the size of HOG feature vector in MATLAB

I'm a beginner in image processing and I'm using MATLAB to extract HOG features from images to train an SVM classifier. The size of the training images is 480*640 pixels and I get 167796 features with the default settings of the built-in MATLAB extractHOGFeatures function. However, when I test the model it gives me fewer features (216 features only!), even though the testing images have the same size as the training images. I get this error in MATLAB: "The number of columns in TEST and training data must be equal".
Do you have any clue how to solve this problem and get a feature vector of the same size for the training and testing sets?
Here is the code,
[fpos,fneg] = featuress(pathPos, pathNeg);
% train SVM
HOG_featV = loadingV(fpos,fneg); % loading and labeling each training example
%% Detection
tSize = [24 32];
testImPath = '.\face_detection\dataset\bikes_and_persons2\';
imlist = dir([testImPath '*.bmp']);
for j = 1:length(imlist)
    disp('inside for loop');
    img = imread([testImPath imlist(j).name]);
    axis equal; axis tight; axis off;
    imshow(img); hold on;
    detect(img,model,tSize);
end % note: this 'end' was missing from the snippet as posted
%% training
function [fpos, fneg] = featuress(pathPos,pathNeg)
% extract features for positive examples
imlist = dir([pathPos '*.bmp']);
for i = 1:length(imlist)
    im = imread([pathPos imlist(i).name]);
    fpos{i} = extractHOGFeatures(double(im));
end
% extract features for negative examples
imlist = dir([pathNeg '*.bmp']);
for i = 1:length(imlist)
    im = imread([pathNeg imlist(i).name]);
    fneg{i} = extractHOGFeatures(double(im));
end
end
%% testing function
function detect(im,model,wSize)
topLeftRow = 1;
topLeftCol = 1;
[bottomRightCol, bottomRightRow, d] = size(im); % note: size returns rows first, so these names are swapped
fcount = 1;
for y = topLeftCol:bottomRightCol-wSize(2)
    for x = topLeftRow:bottomRightRow-wSize(1)
        p1 = [x,y];
        p2 = [x+(wSize(1)-1), y+(wSize(2)-1)];
        po = [p1; p2];
        img = imcut(po,im);
        featureVector{fcount} = extractHOGFeatures(double(img));
        boxPoint{fcount} = [x,y];
        fcount = fcount+1;
        x = x+1; % note: has no effect, the loop variable is reset each iteration
    end
end
label = ones(length(featureVector),1);
P = cell2mat(featureVector');
% each row of P' corresponds to a window
[predictions] = svmclassify(model, P); % classify each window
[a, indx] = max(predictions);
bBox = cell2mat(boxPoint(indx));
rectangle('Position',[bBox(1),bBox(2),24,32],'LineWidth',1,'EdgeColor','r');
end
Thanks in advance.
What's the size of P? Is it 167796 x 216? If so, then you should not transpose featureVector when you call cell2mat, or you should transpose P before you use it. You can also make featureVector a matrix rather than a cell array: since you know that the length of the HOG vector is 167796 and you know how many images you have, you can pre-allocate it up front and fill in the rows, as sketched below.
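A sketch of that pre-allocation, using the 167796 length from the question (shown for the positive examples; the negatives are analogous):
imlist = dir([pathPos '*.bmp']);
nImages = length(imlist);
fpos = zeros(nImages, 167796); % pre-allocated, one row per image
for i = 1:nImages
    im = imread([pathPos imlist(i).name]);
    fpos(i,:) = extractHOGFeatures(double(im));
end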

Image rectification algorithm in MATLAB

I recently found an interesting article on image rectification for stereo image pairs. I liked the algorithm because it was very compact and, from what the article suggested, it did the right thing. After I implemented the MATLAB version on two images, I didn't get a correctly rectified image. I got an image that was pitch black apart from the left and bottom edges, which had pixels. There were also some gray pixels from the original image, but just a handful. I posted the MATLAB code below, along with the link to the article and an example of the result I got for one image (it was the same for the other image).
This is the link to the article A compact algorithm for rectification of stereo pairs.
A screenshot with the initial images and the results is below:
The initial images are the following two (so that you do not have to search for another stereo pair):
function [T1,T2,Pn1,Pn2] = rectify(Po1,Po2)
% RECTIFY: compute rectification matrices
% factorize old PPMs
[A1,R1,t1] = art(Po1);
[A2,R2,t2] = art(Po2);
% optical centers (unchanged)
c1 = - inv(Po1(:,1:3))*Po1(:,4);
c2 = - inv(Po2(:,1:3))*Po2(:,4);
% new x axis (= direction of the baseline)
v1 = (c1-c2);
% new y axis (orthogonal to new x and old z)
v2 = cross(R1(3,:)',v1);
% new z axis (orthogonal to baseline and y)
v3 = cross(v1,v2);
% new extrinsic parameters
R = [v1'/norm(v1)
     v2'/norm(v2)
     v3'/norm(v3)];
% translation is left unchanged
% new intrinsic parameters (arbitrary)
A = (A1 + A2)./2;
A(1,2) = 0; % no skew
A(1,3) = A(1,3) + 160;
% new projection matrices
Pn1 = A * [R -R*c1];
Pn2 = A * [R -R*c2];
% rectifying image transformation
T1 = Pn1(1:3,1:3) * inv(Po1(1:3,1:3));
T2 = Pn2(1:3,1:3) * inv(Po2(1:3,1:3));

function [A,R,t] = art(P)
% ART: factorize a PPM as P=A*[R;t]
Q = inv(P(1:3, 1:3));
[U,B] = qr(Q);
R = inv(U);
t = B*P(1:3,4);
A = inv(B);
A = A./A(3,3);
This is the "main" code from which I call my rectify function
img1 = imread('D:\imag1.png');
img2 = imread('D:\imag2.png');
im1 = rgb2gray(img1);
im2 = rgb2gray(img2);
im1 = im2double(im1);
im2 = im2double(im2);
figure; imshow(im1, 'border', 'tight')
figure; imshow(im2, 'border', 'tight')
% pair of projection matrices obtained after the calibration: Po1, Po2
a = double(9.765*(10^2))
b = double(5.790*(10^-1))
format bank;
Po1 = double([a        5.382*10        -2.398*(10^2)    3.875*(10^5);
              9.849*10 9.333*(10^2)     1.574*(10^2)    2.428*(10^5);
              b        1.108*(10^(-1))  8.077*(10^(-1)) 1.118*(10^3)]);
Po2 = [9.767*(10^2)    5.376*10        -2.400*(10^2)    4.003*(10^4);
       9.868*10        9.310*(10^2)     1.567*(10^2)    2.517*(10^5);
       5.766*(10^(-1)) 1.141*(10^(-1))  8.089*(10^(-1)) 1.174*(10^3)];
[T1, T2, Pn1, Pn2] = rectify(Po1, Po2);
imnoua = conv2(im1, T1);
imnoua2 = conv2(im2, T2);
fprintf('The new image is\n');
figure; imshow(imnoua, 'border', 'tight')
figure; imshow(imnoua2, 'border', 'tight')
Thank you for your time!
As Shai says, T1 and T2 are projective transformation matrices, not filter kernels. You should be using imwarp, rather than conv2:
imnoua = imwarp(im1, projective2d(T1));
imnoua2 = imwarp(im2, projective2d(T2));
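One caveat: projective2d expects the transformation in MATLAB's row-vector convention ([x y 1]*T), so if T1 and T2 were built to act on column vectors, as in the paper, you may need to pass the transposes, projective2d(T1') and projective2d(T2').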
Better yet, use rectifyStereoImages from the Computer Vision System Toolbox. Check out this example.
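A minimal sketch of that route, assuming stereoParams comes from a prior calibration (for example, from the Stereo Camera Calibrator app):
[J1, J2] = rectifyStereoImages(im1, im2, stereoParams);
figure; imshowpair(J1, J2, 'falsecolor'); % overlay the rectified pair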

Converting code to take an RGB image instead of grayscale

I have this code converting a fisheye image into rectangular form, but the code is only able to perform this operation on a grayscale image. Can anybody help convert the code to perform the operation on an RGB image? The code is as follows:
Edit: I have updated the code to include functionality that performs interpolation in each color channel, but this seems to distort the output image. See the pictures below.
function imP = FISHCOLOR(imR)
rMin = 0.1;
rMax = 1;
[Mr, Nr, Dr] = size(imR); % size of rectangular image
xRc = (Mr+1)/2; % coordinates of the center of the image
yRc = (Nr+1)/2;
sx = (Mr-1)/2; % scale factors
sy = (Nr-1)/2;
M = size(imR,1); N = size(imR,2);
dr = (rMax - rMin)/(M-1);
dth = 2*pi/N;
r = rMin:dr:rMin+(M-1)*dr;
th = (0:dth:(N-1)*dth)';
[r,th] = meshgrid(r,th);
x = r.*cos(th);
y = r.*sin(th);
xR = x*sx + xRc;
yR = y*sy + yRc;
imP = zeros(M, N, 3); % initialize the final matrix, one plane per color
for k = 1:3 % colors
    T = imR(:,:,k);
    Ichannel = interp2(T,xR,yR);
    imP(:,:,k) = Ichannel; % add k-th channel
end
SOLVED
Input image <- Image link
Grayscale output, what i would like in color <- Image link
Try changing these three lines:
[Mr Nr] = size(imR); % size of rectangular image
...
imP = zeros(M, N);
...
imP = interp2(imR, xR, yR); %interpolate (imR, xR, yR);
...to these:
[Mr Nr Pr] = size(imR); % size of rectangular image
...
imP = zeros(M, N, Pr);
...
for dim = 1:Pr
    imP(:,:,dim) = interp2(imR(:,:,dim), xR, yR); % interpolate each channel
end
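A hedged usage sketch (the file name is hypothetical): interp2 wants floating-point input and imshow expects doubles in [0,1], so converting up front keeps both happy:
imR = im2double(imread('fisheye.png')); % hypothetical input file
imP = FISHCOLOR(imR);
figure; imshow(imP); % channels interpolated separately, values stay in [0,1]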

Calculate Euclidean distance of two images in HSV color space in MATLAB

I use the code below to calculate the Euclidean distance between two RGB images:
Im1 = imread(filename1);
Im1 = rgb2gray(Im1);
hn1 = imhist(Im1)./numel(Im1);
Im2 = imread(filename2);
Im2 = rgb2gray(Im2);
hn2 = imhist(Im2)./numel(Im2);
f = norm(hn1-hn2);
and it gives me the correct answer.
But now I want to use the code for two images in HSV color mode, and it won't work, because all of the above code operates on a single 2-D grayscale plane while an HSV image has three channels.
Is there any specific code for calculating the Euclidean distance between two images in HSV color space?
The images are in JPEG format.
You need to create a histogram for each channel separately:
function hst = im2hsvHist( img )
%
% computes three-channel histogram in HSV color space
%
n = 256; % number of bins per histogram (per channel)
hsvImg = rgb2hsv( img );
hst = zeros(n,3);
for ci = 1:3
    hst(:,ci) = imhist( hsvImg(:,:,ci), n );
end
hst = hst(:) ./ n; % to 3*n vector, normalize by n and not 3n
Using this function you can compute the image-to-image distance in HSV space:
Im1 = imread(filename1);
hst1 = im2hsvHist(Im1);
Im2 = imread(filename2);
hst2 = im2hsvHist(Im2); % note: im2hsvHist, not im2hsvDist
f = norm( hst1 - hst2 );
Sneak a peek at a vectorized version of im2hsvHist:
n = 256;
hsvImg = rgb2hsv( img );
hst = hist( reshape(hsvImg, [], 3), n ); % all three channels at once, instead of a loop
hst = hst(:) / n;
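Note that newer MATLAB releases recommend histcounts over hist. An equivalent per-channel version (HSV channels live in [0,1], hence the explicit bin edges, matching imhist's fixed range):
n = 256;
edges = linspace(0, 1, n+1); % fixed [0,1] range
hsvImg = rgb2hsv(img);
hst = zeros(n,3);
for ci = 1:3
    hst(:,ci) = histcounts(hsvImg(:,:,ci), edges)'; % counts per bin, per channel
end
hst = hst(:) / n;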